00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2412 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3677 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.164 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.164 The recommended git tool is: git 00:00:00.164 using credential 00000000-0000-0000-0000-000000000002 00:00:00.173 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.200 Fetching changes from the remote Git repository 00:00:00.203 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.232 Using shallow fetch with depth 1 00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.232 > git --version # timeout=10 00:00:00.248 > git --version # 'git version 2.39.2' 00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.015 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.029 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.042 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.042 > git config core.sparsecheckout # timeout=10 00:00:08.056 > git read-tree -mu HEAD # timeout=10 00:00:08.073 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.098 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.098 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.224 [Pipeline] Start of Pipeline 00:00:08.236 [Pipeline] library 00:00:08.239 Loading library shm_lib@master 00:00:08.239 Library shm_lib@master is cached. Copying from home. 00:00:08.252 [Pipeline] node 00:00:08.267 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.270 [Pipeline] { 00:00:08.280 [Pipeline] catchError 00:00:08.281 [Pipeline] { 00:00:08.294 [Pipeline] wrap 00:00:08.303 [Pipeline] { 00:00:08.316 [Pipeline] stage 00:00:08.319 [Pipeline] { (Prologue) 00:00:08.371 [Pipeline] echo 00:00:08.372 Node: VM-host-SM38 00:00:08.376 [Pipeline] cleanWs 00:00:08.386 [WS-CLEANUP] Deleting project workspace... 00:00:08.386 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.392 [WS-CLEANUP] done 00:00:08.599 [Pipeline] setCustomBuildProperty 00:00:08.672 [Pipeline] httpRequest 00:00:09.007 [Pipeline] echo 00:00:09.008 Sorcerer 10.211.164.20 is alive 00:00:09.015 [Pipeline] retry 00:00:09.016 [Pipeline] { 00:00:09.026 [Pipeline] httpRequest 00:00:09.031 HttpMethod: GET 00:00:09.032 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.032 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.044 Response Code: HTTP/1.1 200 OK 00:00:09.045 Success: Status code 200 is in the accepted range: 200,404 00:00:09.046 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.984 [Pipeline] } 00:00:14.005 [Pipeline] // retry 00:00:14.013 [Pipeline] sh 00:00:14.303 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.323 [Pipeline] httpRequest 00:00:14.669 [Pipeline] echo 00:00:14.671 Sorcerer 10.211.164.20 is alive 00:00:14.681 [Pipeline] retry 00:00:14.683 [Pipeline] { 00:00:14.698 [Pipeline] httpRequest 00:00:14.703 HttpMethod: GET 00:00:14.704 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:14.704 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:14.724 Response Code: HTTP/1.1 200 OK 00:00:14.725 Success: Status code 200 is in the accepted range: 200,404 00:00:14.726 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:56.684 [Pipeline] } 00:00:56.702 [Pipeline] // retry 00:00:56.709 [Pipeline] sh 00:00:56.999 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:00.316 [Pipeline] sh 00:01:00.603 + git -C spdk log --oneline -n5 00:01:00.604 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:00.604 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:00.604 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:00.604 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:00.604 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:00.628 [Pipeline] withCredentials 00:01:00.640 > git --version # timeout=10 00:01:00.653 > git --version # 'git version 2.39.2' 00:01:00.673 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:00.675 [Pipeline] { 00:01:00.685 [Pipeline] retry 00:01:00.687 [Pipeline] { 00:01:00.702 [Pipeline] sh 00:01:00.987 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:01.002 [Pipeline] } 00:01:01.022 [Pipeline] // retry 00:01:01.029 [Pipeline] } 00:01:01.046 [Pipeline] // withCredentials 00:01:01.058 [Pipeline] httpRequest 00:01:01.530 [Pipeline] echo 00:01:01.532 Sorcerer 10.211.164.20 is alive 00:01:01.544 [Pipeline] retry 00:01:01.547 [Pipeline] { 00:01:01.564 [Pipeline] httpRequest 00:01:01.570 HttpMethod: GET 00:01:01.570 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:01.571 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:01.573 Response Code: HTTP/1.1 200 OK 00:01:01.574 Success: Status code 200 is in the accepted range: 200,404 00:01:01.574 Saving response body to 
/var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:09.113 [Pipeline] } 00:01:09.128 [Pipeline] // retry 00:01:09.135 [Pipeline] sh 00:01:09.419 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:10.876 [Pipeline] sh 00:01:11.162 + git -C dpdk log --oneline -n5 00:01:11.162 caf0f5d395 version: 22.11.4 00:01:11.162 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:11.162 dc9c799c7d vhost: fix missing spinlock unlock 00:01:11.162 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:11.162 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:11.182 [Pipeline] writeFile 00:01:11.197 [Pipeline] sh 00:01:11.483 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.497 [Pipeline] sh 00:01:11.781 + cat autorun-spdk.conf 00:01:11.781 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.781 SPDK_TEST_NVME=1 00:01:11.781 SPDK_TEST_FTL=1 00:01:11.781 SPDK_TEST_ISAL=1 00:01:11.781 SPDK_RUN_ASAN=1 00:01:11.781 SPDK_RUN_UBSAN=1 00:01:11.781 SPDK_TEST_XNVME=1 00:01:11.781 SPDK_TEST_NVME_FDP=1 00:01:11.781 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:11.781 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:11.781 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.790 RUN_NIGHTLY=1 00:01:11.793 [Pipeline] } 00:01:11.807 [Pipeline] // stage 00:01:11.823 [Pipeline] stage 00:01:11.825 [Pipeline] { (Run VM) 00:01:11.839 [Pipeline] sh 00:01:12.126 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.126 + echo 'Start stage prepare_nvme.sh' 00:01:12.126 Start stage prepare_nvme.sh 00:01:12.126 + [[ -n 8 ]] 00:01:12.126 + disk_prefix=ex8 00:01:12.126 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:12.126 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:12.126 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:12.126 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.126 ++ SPDK_TEST_NVME=1 00:01:12.126 ++ SPDK_TEST_FTL=1 00:01:12.126 ++ SPDK_TEST_ISAL=1 00:01:12.126 ++ SPDK_RUN_ASAN=1 00:01:12.126 ++ SPDK_RUN_UBSAN=1 00:01:12.126 ++ SPDK_TEST_XNVME=1 00:01:12.126 ++ SPDK_TEST_NVME_FDP=1 00:01:12.126 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.126 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:12.126 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.126 ++ RUN_NIGHTLY=1 00:01:12.126 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:12.126 + nvme_files=() 00:01:12.126 + declare -A nvme_files 00:01:12.126 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.126 + nvme_files['nvme.img']=5G 00:01:12.126 + nvme_files['nvme-cmb.img']=5G 00:01:12.126 + nvme_files['nvme-multi0.img']=4G 00:01:12.126 + nvme_files['nvme-multi1.img']=4G 00:01:12.126 + nvme_files['nvme-multi2.img']=4G 00:01:12.126 + nvme_files['nvme-openstack.img']=8G 00:01:12.127 + nvme_files['nvme-zns.img']=5G 00:01:12.127 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.127 + (( SPDK_TEST_FTL == 1 )) 00:01:12.127 + nvme_files["nvme-ftl.img"]=6G 00:01:12.127 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.127 + nvme_files["nvme-fdp.img"]=1G 00:01:12.127 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:12.127 + for nvme in "${!nvme_files[@]}" 00:01:12.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:12.127 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.127 + for nvme in "${!nvme_files[@]}" 00:01:12.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:12.700 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:12.700 + for nvme in "${!nvme_files[@]}" 00:01:12.700 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:12.700 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.700 + for nvme in "${!nvme_files[@]}" 00:01:12.700 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:12.961 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.961 + for nvme in "${!nvme_files[@]}" 00:01:12.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:12.961 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.961 + for nvme in "${!nvme_files[@]}" 00:01:12.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:12.961 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.961 + for nvme in "${!nvme_files[@]}" 00:01:12.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:12.961 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.961 + for nvme in "${!nvme_files[@]}" 00:01:12.961 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:13.222 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:13.222 + for nvme in "${!nvme_files[@]}" 00:01:13.222 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:13.222 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.222 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:13.222 + echo 'End stage prepare_nvme.sh' 00:01:13.222 End stage prepare_nvme.sh 00:01:13.236 [Pipeline] sh 00:01:13.523 + DISTRO=fedora39 00:01:13.523 + CPUS=10 00:01:13.523 + RAM=12288 00:01:13.523 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:13.523 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:13.523 00:01:13.523 
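The prepare_nvme.sh stage that just finished above ("End stage prepare_nvme.sh") created one raw backing file per entry in the nvme_files associative array, sized per image role (plain 5G controllers, 4G multi-namespace volumes, plus a 6G FTL image and a 1G FDP image added because SPDK_TEST_FTL and SPDK_TEST_NVME_FDP are enabled). The "Formatting ... fmt=raw ... preallocation=falloc" lines are the characteristic output of qemu-img create. create_nvme_img.sh itself is not captured in this log, so the following is only a sketch of the loop's observable effect, assuming qemu-img does the work:

    # Hypothetical condensation of the image-creation loop traced above.
    declare -A nvme_files=( [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G )
    backend_dir=/var/lib/libvirt/images/backends
    for name in "${!nvme_files[@]}"; do
        # ex8- matches the disk_prefix derived from the executor number (8)
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex8-$name" "${nvme_files[$name]}"
    done

Each backing file is later attached to the guest as its own emulated NVMe controller through the paired -drive/-device arguments visible in the "Command line args" dump further below.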
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:13.523 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:13.523 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:13.523 HELP=0 00:01:13.523 DRY_RUN=0 00:01:13.523 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:13.523 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:13.523 NVME_AUTO_CREATE=0 00:01:13.523 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:13.523 NVME_CMB=,,,, 00:01:13.523 NVME_PMR=,,,, 00:01:13.523 NVME_ZNS=,,,, 00:01:13.523 NVME_MS=true,,,, 00:01:13.523 NVME_FDP=,,,on, 00:01:13.523 SPDK_VAGRANT_DISTRO=fedora39 00:01:13.523 SPDK_VAGRANT_VMCPU=10 00:01:13.523 SPDK_VAGRANT_VMRAM=12288 00:01:13.523 SPDK_VAGRANT_PROVIDER=libvirt 00:01:13.523 SPDK_VAGRANT_HTTP_PROXY= 00:01:13.523 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:13.523 SPDK_OPENSTACK_NETWORK=0 00:01:13.523 VAGRANT_PACKAGE_BOX=0 00:01:13.523 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:13.523 FORCE_DISTRO=true 00:01:13.523 VAGRANT_BOX_VERSION= 00:01:13.523 EXTRA_VAGRANTFILES= 00:01:13.523 NIC_MODEL=e1000 00:01:13.523 00:01:13.523 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:13.523 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:15.433 Bringing machine 'default' up with 'libvirt' provider... 00:01:16.006 ==> default: Creating image (snapshot of base box volume). 00:01:16.267 ==> default: Creating domain with the following settings... 
00:01:16.267 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732848511_66d8c50afde47431dff3 00:01:16.267 ==> default: -- Domain type: kvm 00:01:16.267 ==> default: -- Cpus: 10 00:01:16.267 ==> default: -- Feature: acpi 00:01:16.267 ==> default: -- Feature: apic 00:01:16.267 ==> default: -- Feature: pae 00:01:16.267 ==> default: -- Memory: 12288M 00:01:16.267 ==> default: -- Memory Backing: hugepages: 00:01:16.267 ==> default: -- Management MAC: 00:01:16.267 ==> default: -- Loader: 00:01:16.267 ==> default: -- Nvram: 00:01:16.267 ==> default: -- Base box: spdk/fedora39 00:01:16.267 ==> default: -- Storage pool: default 00:01:16.267 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732848511_66d8c50afde47431dff3.img (20G) 00:01:16.267 ==> default: -- Volume Cache: default 00:01:16.267 ==> default: -- Kernel: 00:01:16.267 ==> default: -- Initrd: 00:01:16.267 ==> default: -- Graphics Type: vnc 00:01:16.267 ==> default: -- Graphics Port: -1 00:01:16.267 ==> default: -- Graphics IP: 127.0.0.1 00:01:16.267 ==> default: -- Graphics Password: Not defined 00:01:16.267 ==> default: -- Video Type: cirrus 00:01:16.267 ==> default: -- Video VRAM: 9216 00:01:16.267 ==> default: -- Sound Type: 00:01:16.267 ==> default: -- Keymap: en-us 00:01:16.267 ==> default: -- TPM Path: 00:01:16.267 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:16.267 ==> default: -- Command line args: 00:01:16.267 ==> default: -> value=-device, 00:01:16.267 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:16.267 ==> default: -> value=-drive, 00:01:16.267 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:16.267 ==> default: -> value=-device, 00:01:16.267 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:16.267 ==> default: -> value=-device, 00:01:16.267 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:16.267 ==> default: -> value=-drive, 00:01:16.267 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:16.267 ==> default: -> value=-device, 00:01:16.267 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.267 ==> default: -> value=-device, 00:01:16.267 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:16.267 ==> default: -> value=-drive, 00:01:16.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.268 ==> default: -> value=-drive, 00:01:16.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.268 ==> default: -> value=-drive, 00:01:16.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:16.268 ==> default: -> value=-drive, 00:01:16.268 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:16.268 ==> default: -> value=-device, 00:01:16.268 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.528 ==> default: Creating shared folders metadata... 00:01:16.528 ==> default: Starting domain. 00:01:18.442 ==> default: Waiting for domain to get an IP address... 00:01:36.561 ==> default: Waiting for SSH to become available... 00:01:36.561 ==> default: Configuring and enabling network interfaces... 00:01:38.475 default: SSH address: 192.168.121.205:22 00:01:38.475 default: SSH username: vagrant 00:01:38.475 default: SSH auth method: private key 00:01:41.019 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:47.644 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:52.942 ==> default: Mounting SSHFS shared folder... 00:01:54.856 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:54.856 ==> default: Checking Mount.. 00:01:55.797 ==> default: Folder Successfully Mounted! 00:01:55.797 00:01:55.797 SUCCESS! 00:01:55.797 00:01:55.797 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:55.797 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:55.797 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:55.797 00:01:55.808 [Pipeline] } 00:01:55.826 [Pipeline] // stage 00:01:55.837 [Pipeline] dir 00:01:55.838 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:55.840 [Pipeline] { 00:01:55.855 [Pipeline] catchError 00:01:55.857 [Pipeline] { 00:01:55.869 [Pipeline] sh 00:01:56.154 + vagrant ssh-config --host vagrant 00:01:56.154 + sed -ne '/^Host/,$p' 00:01:56.154 + tee ssh_conf 00:01:58.698 Host vagrant 00:01:58.698 HostName 192.168.121.205 00:01:58.698 User vagrant 00:01:58.698 Port 22 00:01:58.698 UserKnownHostsFile /dev/null 00:01:58.698 StrictHostKeyChecking no 00:01:58.698 PasswordAuthentication no 00:01:58.698 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:58.698 IdentitiesOnly yes 00:01:58.698 LogLevel FATAL 00:01:58.698 ForwardAgent yes 00:01:58.698 ForwardX11 yes 00:01:58.698 00:01:58.713 [Pipeline] withEnv 00:01:58.715 [Pipeline] { 00:01:58.729 [Pipeline] sh 00:01:59.013 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:59.013 source /etc/os-release 00:01:59.013 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.013 # Minimal, systemd-like check. 
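# NB: /.dockerenv is created by the Docker runtime at the container root, so
# its presence means this shell is running inside a container rather than on
# a plain VM; there $HOSTNAME defaults to the container id, which is why the
# branch below rebuilds a human-readable agent name around it.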
00:01:59.013 if [[ -e /.dockerenv ]]; then 00:01:59.013 # Clear garbage from the node'\''s name: 00:01:59.013 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.013 # $HOSTNAME is the actual container id 00:01:59.013 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.013 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.013 # We can assume this is a mount from a host where container is running, 00:01:59.013 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.013 container="$(< /etc/hostname) ($agent)" 00:01:59.013 else 00:01:59.013 # Fallback 00:01:59.013 container=$agent 00:01:59.013 fi 00:01:59.013 fi 00:01:59.013 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.013 ' 00:01:59.300 [Pipeline] } 00:01:59.311 [Pipeline] // withEnv 00:01:59.317 [Pipeline] setCustomBuildProperty 00:01:59.328 [Pipeline] stage 00:01:59.330 [Pipeline] { (Tests) 00:01:59.343 [Pipeline] sh 00:01:59.626 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.904 [Pipeline] sh 00:02:00.242 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.260 [Pipeline] timeout 00:02:00.261 Timeout set to expire in 50 min 00:02:00.263 [Pipeline] { 00:02:00.277 [Pipeline] sh 00:02:00.564 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:01.136 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:01.151 [Pipeline] sh 00:02:01.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:01.718 [Pipeline] sh 00:02:02.005 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.285 [Pipeline] sh 00:02:02.575 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:02.836 ++ readlink -f spdk_repo 00:02:02.836 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.836 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.836 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.836 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.836 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.836 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.836 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.836 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:02.836 + cd /home/vagrant/spdk_repo 00:02:02.836 + source /etc/os-release 00:02:02.836 ++ NAME='Fedora Linux' 00:02:02.836 ++ VERSION='39 (Cloud Edition)' 00:02:02.836 ++ ID=fedora 00:02:02.836 ++ VERSION_ID=39 00:02:02.836 ++ VERSION_CODENAME= 00:02:02.836 ++ PLATFORM_ID=platform:f39 00:02:02.836 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:02.836 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.836 ++ LOGO=fedora-logo-icon 00:02:02.836 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:02.836 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.836 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:02.836 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.836 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.836 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.836 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:02.836 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.836 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:02.836 ++ SUPPORT_END=2024-11-12 00:02:02.837 ++ VARIANT='Cloud Edition' 00:02:02.837 ++ VARIANT_ID=cloud 00:02:02.837 + uname -a 00:02:02.837 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:02.837 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:03.360 Hugepages 00:02:03.360 node hugesize free / total 00:02:03.360 node0 1048576kB 0 / 0 00:02:03.360 node0 2048kB 0 / 0 00:02:03.360 00:02:03.360 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.360 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.360 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:03.622 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:03.622 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:03.622 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:03.622 + rm -f /tmp/spdk-ld-path 00:02:03.622 + source autorun-spdk.conf 00:02:03.622 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.622 ++ SPDK_TEST_NVME=1 00:02:03.622 ++ SPDK_TEST_FTL=1 00:02:03.622 ++ SPDK_TEST_ISAL=1 00:02:03.622 ++ SPDK_RUN_ASAN=1 00:02:03.622 ++ SPDK_RUN_UBSAN=1 00:02:03.622 ++ SPDK_TEST_XNVME=1 00:02:03.622 ++ SPDK_TEST_NVME_FDP=1 00:02:03.622 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.622 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.622 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.622 ++ RUN_NIGHTLY=1 00:02:03.622 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.622 + [[ -n '' ]] 00:02:03.622 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:03.622 + for M in /var/spdk/build-*-manifest.txt 00:02:03.622 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:03.622 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.622 + for M in /var/spdk/build-*-manifest.txt 00:02:03.622 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.622 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.622 + for M in /var/spdk/build-*-manifest.txt 00:02:03.622 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.622 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.622 ++ uname 00:02:03.622 + [[ Linux == 
\L\i\n\u\x ]] 00:02:03.622 + sudo dmesg -T 00:02:03.622 + sudo dmesg --clear 00:02:03.622 + dmesg_pid=5766 00:02:03.622 + [[ Fedora Linux == FreeBSD ]] 00:02:03.622 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.622 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.622 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.622 + sudo dmesg -Tw 00:02:03.622 + [[ -x /usr/src/fio-static/fio ]] 00:02:03.622 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.622 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.622 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.622 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.622 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.622 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.622 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.622 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.622 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.622 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.622 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.622 02:49:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:03.622 02:49:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.622 02:49:19 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:03.622 02:49:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:03.622 02:49:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.885 02:49:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:03.885 02:49:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.885 02:49:19 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:03.885 02:49:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.885 02:49:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.885 02:49:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.885 02:49:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.885 02:49:19 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.885 02:49:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.885 02:49:19 -- paths/export.sh@5 -- $ export PATH 00:02:03.885 02:49:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.885 02:49:19 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:03.885 02:49:19 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:03.885 02:49:19 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732848559.XXXXXX 00:02:03.885 02:49:19 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732848559.ZWElqZ 00:02:03.885 02:49:19 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:03.885 02:49:19 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:03.885 02:49:19 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:03.885 02:49:19 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:03.885 02:49:19 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.885 02:49:19 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.885 02:49:19 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:03.885 02:49:19 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:03.885 02:49:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.885 02:49:19 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:03.885 02:49:19 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:03.885 02:49:19 -- pm/common@17 -- $ local monitor 00:02:03.885 02:49:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.885 02:49:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.885 02:49:19 -- pm/common@25 -- $ 
sleep 1 00:02:03.885 02:49:19 -- pm/common@21 -- $ date +%s 00:02:03.885 02:49:19 -- pm/common@21 -- $ date +%s 00:02:03.885 02:49:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732848559 00:02:03.885 02:49:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732848559 00:02:03.885 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732848559_collect-cpu-load.pm.log 00:02:03.885 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732848559_collect-vmstat.pm.log 00:02:04.827 02:49:20 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:04.827 02:49:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.827 02:49:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.827 02:49:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:04.827 02:49:20 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.827 Fri Nov 29 02:49:20 AM UTC 2024 00:02:04.827 02:49:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.827 v25.01-pre-276-g35cd3e84d 00:02:04.827 02:49:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:04.827 02:49:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:04.827 02:49:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:04.827 02:49:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:04.827 02:49:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.827 ************************************ 00:02:04.827 START TEST asan 00:02:04.827 ************************************ 00:02:04.827 using asan 00:02:04.827 02:49:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:04.827 00:02:04.827 real 0m0.000s 00:02:04.827 user 0m0.000s 00:02:04.827 sys 0m0.000s 00:02:04.827 ************************************ 00:02:04.827 END TEST asan 00:02:04.827 ************************************ 00:02:04.827 02:49:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:04.827 02:49:20 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.827 02:49:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.827 02:49:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.827 02:49:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:04.827 02:49:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:04.827 02:49:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.827 ************************************ 00:02:04.827 START TEST ubsan 00:02:04.827 ************************************ 00:02:04.827 using ubsan 00:02:04.827 02:49:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:04.827 00:02:04.827 real 0m0.000s 00:02:04.827 user 0m0.000s 00:02:04.827 sys 0m0.000s 00:02:04.827 02:49:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:04.827 ************************************ 00:02:04.827 END TEST ubsan 00:02:04.827 ************************************ 00:02:04.827 02:49:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.090 02:49:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:05.090 02:49:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:05.090 02:49:20 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:05.090 02:49:20 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 
1 ']' 00:02:05.090 02:49:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:05.090 02:49:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.090 ************************************ 00:02:05.090 START TEST build_native_dpdk 00:02:05.090 ************************************ 00:02:05.090 02:49:20 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:05.090 caf0f5d395 version: 22.11.4 00:02:05.090 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:05.090 dc9c799c7d vhost: fix missing spinlock unlock 00:02:05.090 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:05.090 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:05.090 patching file config/rte_config.h 00:02:05.090 Hunk #1 succeeded at 60 (offset 1 line). 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:05.090 patching file lib/pcapng/rte_pcapng.c 00:02:05.090 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:05.090 02:49:20 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:05.090 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:05.091 02:49:20 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:05.091 02:49:20 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:05.091 02:49:20 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:05.091 02:49:20 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:05.091 02:49:20 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:05.091 02:49:20 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:09.312 The Meson build system 00:02:09.312 Version: 1.5.0 00:02:09.312 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:09.312 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:09.312 Build type: native build 00:02:09.312 Program cat found: YES (/usr/bin/cat) 00:02:09.312 Project name: DPDK 00:02:09.312 Project version: 22.11.4 00:02:09.312 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:09.312 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:09.312 Host machine cpu family: x86_64 00:02:09.312 Host machine cpu: x86_64 00:02:09.312 Message: ## Building in Developer Mode ## 00:02:09.312 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.312 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:09.312 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.312 Program objdump found: YES (/usr/bin/objdump) 00:02:09.312 Program python3 found: YES (/usr/bin/python3) 00:02:09.312 Program cat found: YES (/usr/bin/cat) 00:02:09.312 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
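The version-comparison xtrace that precedes the meson invocation above is scripts/common.sh walking two dotted version strings field by field: each version is split on ".", "-" and ":" into an array, and the first unequal field decides the result. 22.11.4 therefore tests as older than 24.07.0, which is why the pre-24.07 rte_pcapng patch was applied and the "ge 22.11.4 24.07.0" guard returned false. A condensed sketch of the same comparison (a hypothetical helper, not the real cmp_versions; it assumes purely numeric fields and pads missing ones with zero):

    # ver_lt A B: succeed when version A sorts strictly before version B.
    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0  # first smaller field wins
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    ver_lt 22.11.4 24.07.0 && echo "apply pre-24.07 pcapng patch"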
00:02:09.312 Checking for size of "void *" : 8 00:02:09.312 Checking for size of "void *" : 8 (cached) 00:02:09.312 Library m found: YES 00:02:09.312 Library numa found: YES 00:02:09.312 Has header "numaif.h" : YES 00:02:09.312 Library fdt found: NO 00:02:09.312 Library execinfo found: NO 00:02:09.312 Has header "execinfo.h" : YES 00:02:09.312 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:09.312 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.312 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.312 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.312 Run-time dependency openssl found: YES 3.1.1 00:02:09.312 Run-time dependency libpcap found: YES 1.10.4 00:02:09.312 Has header "pcap.h" with dependency libpcap: YES 00:02:09.312 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.312 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.312 Compiler for C supports arguments -Wformat: YES 00:02:09.312 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.312 Compiler for C supports arguments -Wformat-security: NO 00:02:09.312 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.312 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.312 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.312 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.312 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.312 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.312 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.312 Compiler for C supports arguments -Wundef: YES 00:02:09.312 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.312 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.312 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.312 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.312 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.312 Compiler for C supports arguments -mavx512f: YES 00:02:09.312 Checking if "AVX512 checking" compiles: YES 00:02:09.312 Fetching value of define "__SSE4_2__" : 1 00:02:09.312 Fetching value of define "__AES__" : 1 00:02:09.312 Fetching value of define "__AVX__" : 1 00:02:09.312 Fetching value of define "__AVX2__" : 1 00:02:09.312 Fetching value of define "__AVX512BW__" : 1 00:02:09.312 Fetching value of define "__AVX512CD__" : 1 00:02:09.312 Fetching value of define "__AVX512DQ__" : 1 00:02:09.312 Fetching value of define "__AVX512F__" : 1 00:02:09.312 Fetching value of define "__AVX512VL__" : 1 00:02:09.312 Fetching value of define "__PCLMUL__" : 1 00:02:09.312 Fetching value of define "__RDRND__" : 1 00:02:09.312 Fetching value of define "__RDSEED__" : 1 00:02:09.312 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:09.312 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.312 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.312 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.312 Checking for function "getentropy" : YES 00:02:09.312 Message: lib/eal: Defining dependency "eal" 00:02:09.312 Message: lib/ring: Defining dependency "ring" 00:02:09.312 Message: lib/rcu: Defining dependency "rcu" 00:02:09.312 Message: lib/mempool: Defining dependency "mempool" 00:02:09.312 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.312 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.312 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:09.312 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:09.312 Compiler for C supports arguments -mpclmul: YES 00:02:09.312 Compiler for C supports arguments -maes: YES 00:02:09.312 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.312 Compiler for C supports arguments -mavx512bw: YES 00:02:09.312 Compiler for C supports arguments -mavx512dq: YES 00:02:09.312 Compiler for C supports arguments -mavx512vl: YES 00:02:09.312 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.312 Compiler for C supports arguments -mavx2: YES 00:02:09.312 Compiler for C supports arguments -mavx: YES 00:02:09.312 Message: lib/net: Defining dependency "net" 00:02:09.312 Message: lib/meter: Defining dependency "meter" 00:02:09.312 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.312 Message: lib/pci: Defining dependency "pci" 00:02:09.312 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.312 Message: lib/metrics: Defining dependency "metrics" 00:02:09.312 Message: lib/hash: Defining dependency "hash" 00:02:09.312 Message: lib/timer: Defining dependency "timer" 00:02:09.312 Fetching value of define "__AVX2__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.312 Message: lib/acl: Defining dependency "acl" 00:02:09.312 Message: lib/bbdev: Defining dependency "bbdev" 00:02:09.312 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:09.312 Run-time dependency libelf found: YES 0.191 00:02:09.312 Message: lib/bpf: Defining dependency "bpf" 00:02:09.312 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:09.312 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.312 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.312 Message: lib/distributor: Defining dependency "distributor" 00:02:09.312 Message: lib/efd: Defining dependency "efd" 00:02:09.312 Message: lib/eventdev: Defining dependency "eventdev" 00:02:09.312 Message: lib/gpudev: Defining dependency "gpudev" 00:02:09.312 Message: lib/gro: Defining dependency "gro" 00:02:09.312 Message: lib/gso: Defining dependency "gso" 00:02:09.312 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:09.312 Message: lib/jobstats: Defining dependency "jobstats" 00:02:09.312 Message: lib/latencystats: Defining dependency "latencystats" 00:02:09.312 Message: lib/lpm: Defining dependency "lpm" 00:02:09.312 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512IFMA__" : 1 00:02:09.312 Message: lib/member: Defining dependency "member" 00:02:09.312 Message: lib/pcapng: Defining dependency "pcapng" 00:02:09.312 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.312 Message: lib/power: Defining dependency "power" 00:02:09.312 Message: lib/rawdev: Defining dependency "rawdev" 00:02:09.312 Message: lib/regexdev: Defining dependency "regexdev" 00:02:09.312 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.312 Message: lib/rib: Defining dependency "rib" 00:02:09.312 Message: lib/reorder: 
Defining dependency "reorder" 00:02:09.312 Message: lib/sched: Defining dependency "sched" 00:02:09.312 Message: lib/security: Defining dependency "security" 00:02:09.312 Message: lib/stack: Defining dependency "stack" 00:02:09.312 Has header "linux/userfaultfd.h" : YES 00:02:09.312 Message: lib/vhost: Defining dependency "vhost" 00:02:09.312 Message: lib/ipsec: Defining dependency "ipsec" 00:02:09.312 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:09.312 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.312 Message: lib/fib: Defining dependency "fib" 00:02:09.312 Message: lib/port: Defining dependency "port" 00:02:09.312 Message: lib/pdump: Defining dependency "pdump" 00:02:09.312 Message: lib/table: Defining dependency "table" 00:02:09.312 Message: lib/pipeline: Defining dependency "pipeline" 00:02:09.312 Message: lib/graph: Defining dependency "graph" 00:02:09.312 Message: lib/node: Defining dependency "node" 00:02:09.312 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.312 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.312 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.312 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.312 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:09.312 Compiler for C supports arguments -Wno-unused-value: YES 00:02:09.312 Compiler for C supports arguments -Wno-format: YES 00:02:09.312 Compiler for C supports arguments -Wno-format-security: YES 00:02:09.312 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:09.312 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:09.312 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:09.312 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:10.701 Fetching value of define "__AVX2__" : 1 (cached) 00:02:10.701 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:10.701 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:10.701 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.701 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:10.701 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:10.701 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:10.701 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:10.701 Configuring doxy-api.conf using configuration 00:02:10.701 Program sphinx-build found: NO 00:02:10.701 Configuring rte_build_config.h using configuration 00:02:10.701 Message: 00:02:10.701 ================= 00:02:10.701 Applications Enabled 00:02:10.701 ================= 00:02:10.701 00:02:10.701 apps: 00:02:10.701 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:10.701 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:10.701 test-security-perf, 00:02:10.701 00:02:10.701 Message: 00:02:10.701 ================= 00:02:10.701 Libraries Enabled 00:02:10.701 ================= 00:02:10.701 00:02:10.701 libs: 00:02:10.701 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:10.701 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:10.701 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:10.701 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:10.701 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:10.701 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:10.701 table, pipeline, graph, node, 00:02:10.701 00:02:10.701 Message: 00:02:10.701 =============== 00:02:10.701 Drivers Enabled 00:02:10.701 =============== 00:02:10.701 00:02:10.701 common: 00:02:10.701 00:02:10.701 bus: 00:02:10.701 pci, vdev, 00:02:10.701 mempool: 00:02:10.701 ring, 00:02:10.701 dma: 00:02:10.701 00:02:10.701 net: 00:02:10.701 i40e, 00:02:10.701 raw: 00:02:10.701 00:02:10.701 crypto: 00:02:10.701 00:02:10.701 compress: 00:02:10.701 00:02:10.701 regex: 00:02:10.701 00:02:10.701 vdpa: 00:02:10.701 00:02:10.701 event: 00:02:10.701 00:02:10.701 baseband: 00:02:10.701 00:02:10.701 gpu: 00:02:10.701 00:02:10.701 00:02:10.701 Message: 00:02:10.701 ================= 00:02:10.701 Content Skipped 00:02:10.701 ================= 00:02:10.701 00:02:10.701 apps: 00:02:10.701 00:02:10.701 libs: 00:02:10.701 kni: explicitly disabled via build config (deprecated lib) 00:02:10.701 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:10.701 00:02:10.701 drivers: 00:02:10.701 common/cpt: not in enabled drivers build config 00:02:10.701 common/dpaax: not in enabled drivers build config 00:02:10.701 common/iavf: not in enabled drivers build config 00:02:10.701 common/idpf: not in enabled drivers build config 00:02:10.701 common/mvep: not in enabled drivers build config 00:02:10.701 common/octeontx: not in enabled drivers build config 00:02:10.701 bus/auxiliary: not in enabled drivers build config 00:02:10.701 bus/dpaa: not in enabled drivers build config 00:02:10.701 bus/fslmc: not in enabled drivers build config 00:02:10.701 bus/ifpga: not in enabled drivers build config 00:02:10.701 bus/vmbus: not in enabled drivers build config 00:02:10.701 common/cnxk: not in enabled drivers build config 00:02:10.701 common/mlx5: not in enabled drivers build config 00:02:10.701 common/qat: not in enabled drivers build config 00:02:10.701 common/sfc_efx: not in enabled drivers build config 00:02:10.701 mempool/bucket: not in enabled drivers build config 00:02:10.701 mempool/cnxk: not in enabled drivers build config 00:02:10.701 mempool/dpaa: not in enabled drivers build config 00:02:10.701 mempool/dpaa2: not in enabled drivers build config 00:02:10.701 mempool/octeontx: not in enabled drivers build config 00:02:10.701 mempool/stack: not in enabled drivers build config 00:02:10.701 dma/cnxk: not in enabled drivers build config 00:02:10.701 dma/dpaa: not in enabled drivers build config 00:02:10.701 dma/dpaa2: not in enabled drivers build config 00:02:10.701 dma/hisilicon: not in enabled drivers build config 00:02:10.701 dma/idxd: not in enabled drivers build config 00:02:10.701 dma/ioat: not in enabled drivers build config 00:02:10.701 dma/skeleton: not in enabled drivers build config 00:02:10.701 net/af_packet: not in enabled drivers build config 00:02:10.701 net/af_xdp: not in enabled drivers build config 00:02:10.701 net/ark: not in enabled drivers build config 00:02:10.701 net/atlantic: not in enabled drivers build config 00:02:10.701 net/avp: not in enabled drivers build config 00:02:10.701 net/axgbe: not in enabled drivers build config 00:02:10.701 net/bnx2x: not in enabled drivers build config 00:02:10.701 net/bnxt: not in enabled drivers build config 00:02:10.701 net/bonding: not in enabled drivers build config 00:02:10.701 net/cnxk: not in enabled drivers build config 00:02:10.701 net/cxgbe: not in enabled drivers build config 00:02:10.701 net/dpaa: not in enabled drivers build config 
00:02:10.701 net/dpaa2: not in enabled drivers build config 00:02:10.701 net/e1000: not in enabled drivers build config 00:02:10.701 net/ena: not in enabled drivers build config 00:02:10.701 net/enetc: not in enabled drivers build config 00:02:10.701 net/enetfec: not in enabled drivers build config 00:02:10.701 net/enic: not in enabled drivers build config 00:02:10.701 net/failsafe: not in enabled drivers build config 00:02:10.701 net/fm10k: not in enabled drivers build config 00:02:10.701 net/gve: not in enabled drivers build config 00:02:10.701 net/hinic: not in enabled drivers build config 00:02:10.701 net/hns3: not in enabled drivers build config 00:02:10.701 net/iavf: not in enabled drivers build config 00:02:10.701 net/ice: not in enabled drivers build config 00:02:10.701 net/idpf: not in enabled drivers build config 00:02:10.701 net/igc: not in enabled drivers build config 00:02:10.701 net/ionic: not in enabled drivers build config 00:02:10.701 net/ipn3ke: not in enabled drivers build config 00:02:10.701 net/ixgbe: not in enabled drivers build config 00:02:10.701 net/kni: not in enabled drivers build config 00:02:10.702 net/liquidio: not in enabled drivers build config 00:02:10.702 net/mana: not in enabled drivers build config 00:02:10.702 net/memif: not in enabled drivers build config 00:02:10.702 net/mlx4: not in enabled drivers build config 00:02:10.702 net/mlx5: not in enabled drivers build config 00:02:10.702 net/mvneta: not in enabled drivers build config 00:02:10.702 net/mvpp2: not in enabled drivers build config 00:02:10.702 net/netvsc: not in enabled drivers build config 00:02:10.702 net/nfb: not in enabled drivers build config 00:02:10.702 net/nfp: not in enabled drivers build config 00:02:10.702 net/ngbe: not in enabled drivers build config 00:02:10.702 net/null: not in enabled drivers build config 00:02:10.702 net/octeontx: not in enabled drivers build config 00:02:10.702 net/octeon_ep: not in enabled drivers build config 00:02:10.702 net/pcap: not in enabled drivers build config 00:02:10.702 net/pfe: not in enabled drivers build config 00:02:10.702 net/qede: not in enabled drivers build config 00:02:10.702 net/ring: not in enabled drivers build config 00:02:10.702 net/sfc: not in enabled drivers build config 00:02:10.702 net/softnic: not in enabled drivers build config 00:02:10.702 net/tap: not in enabled drivers build config 00:02:10.702 net/thunderx: not in enabled drivers build config 00:02:10.702 net/txgbe: not in enabled drivers build config 00:02:10.702 net/vdev_netvsc: not in enabled drivers build config 00:02:10.702 net/vhost: not in enabled drivers build config 00:02:10.702 net/virtio: not in enabled drivers build config 00:02:10.702 net/vmxnet3: not in enabled drivers build config 00:02:10.702 raw/cnxk_bphy: not in enabled drivers build config 00:02:10.702 raw/cnxk_gpio: not in enabled drivers build config 00:02:10.702 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:10.702 raw/ifpga: not in enabled drivers build config 00:02:10.702 raw/ntb: not in enabled drivers build config 00:02:10.702 raw/skeleton: not in enabled drivers build config 00:02:10.702 crypto/armv8: not in enabled drivers build config 00:02:10.702 crypto/bcmfs: not in enabled drivers build config 00:02:10.702 crypto/caam_jr: not in enabled drivers build config 00:02:10.702 crypto/ccp: not in enabled drivers build config 00:02:10.702 crypto/cnxk: not in enabled drivers build config 00:02:10.702 crypto/dpaa_sec: not in enabled drivers build config 00:02:10.702 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:10.702 crypto/ipsec_mb: not in enabled drivers build config 00:02:10.702 crypto/mlx5: not in enabled drivers build config 00:02:10.702 crypto/mvsam: not in enabled drivers build config 00:02:10.702 crypto/nitrox: not in enabled drivers build config 00:02:10.702 crypto/null: not in enabled drivers build config 00:02:10.702 crypto/octeontx: not in enabled drivers build config 00:02:10.702 crypto/openssl: not in enabled drivers build config 00:02:10.702 crypto/scheduler: not in enabled drivers build config 00:02:10.702 crypto/uadk: not in enabled drivers build config 00:02:10.702 crypto/virtio: not in enabled drivers build config 00:02:10.702 compress/isal: not in enabled drivers build config 00:02:10.702 compress/mlx5: not in enabled drivers build config 00:02:10.702 compress/octeontx: not in enabled drivers build config 00:02:10.702 compress/zlib: not in enabled drivers build config 00:02:10.702 regex/mlx5: not in enabled drivers build config 00:02:10.702 regex/cn9k: not in enabled drivers build config 00:02:10.702 vdpa/ifc: not in enabled drivers build config 00:02:10.702 vdpa/mlx5: not in enabled drivers build config 00:02:10.702 vdpa/sfc: not in enabled drivers build config 00:02:10.702 event/cnxk: not in enabled drivers build config 00:02:10.702 event/dlb2: not in enabled drivers build config 00:02:10.702 event/dpaa: not in enabled drivers build config 00:02:10.702 event/dpaa2: not in enabled drivers build config 00:02:10.702 event/dsw: not in enabled drivers build config 00:02:10.702 event/opdl: not in enabled drivers build config 00:02:10.702 event/skeleton: not in enabled drivers build config 00:02:10.702 event/sw: not in enabled drivers build config 00:02:10.702 event/octeontx: not in enabled drivers build config 00:02:10.702 baseband/acc: not in enabled drivers build config 00:02:10.702 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:10.702 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:10.702 baseband/la12xx: not in enabled drivers build config 00:02:10.702 baseband/null: not in enabled drivers build config 00:02:10.702 baseband/turbo_sw: not in enabled drivers build config 00:02:10.702 gpu/cuda: not in enabled drivers build config 00:02:10.702 00:02:10.702 00:02:10.702 Build targets in project: 309 00:02:10.702 00:02:10.702 DPDK 22.11.4 00:02:10.702 00:02:10.702 User defined options 00:02:10.702 libdir : lib 00:02:10.702 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:10.702 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:10.702 c_link_args : 00:02:10.702 enable_docs : false 00:02:10.702 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:10.702 enable_kmods : false 00:02:10.702 machine : native 00:02:10.702 tests : false 00:02:10.702 00:02:10.702 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.702 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
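Note: the warning above is emitted because the configure step was invoked as plain `meson [options]` rather than with the explicit `setup` subcommand. A minimal sketch of the equivalent, non-deprecated invocation, reconstructed from the "User defined options" summary printed above (the option names are DPDK 22.11 meson options; the exact command line used by the autobuild script does not appear in this log, and the enable_drivers value is copied as printed, including any truncation):

  # run from the DPDK source root, e.g. /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

  # then compile, matching the ninja step in the log below
  ninja -C build-tmp -j10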
00:02:10.964 02:49:26 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:10.964 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:10.964 [1/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:10.964 [2/738] Generating lib/rte_kvargs_def with a custom command 00:02:10.964 [3/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:10.964 [4/738] Generating lib/rte_telemetry_def with a custom command 00:02:10.964 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.964 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.964 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.964 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.964 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:10.964 [10/738] Linking static target lib/librte_kvargs.a 00:02:10.964 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.224 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.224 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.224 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.224 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.224 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.224 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.224 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.224 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.224 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.224 [21/738] Linking target lib/librte_kvargs.so.23.0 00:02:11.224 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:11.224 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.224 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.224 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.486 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.486 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.486 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.486 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.486 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.486 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.486 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.486 [33/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.486 [34/738] Linking static target lib/librte_telemetry.a 00:02:11.486 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.486 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.486 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:11.774 [38/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:11.774 [39/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:11.774 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.774 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.774 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.774 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.774 [44/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.774 [45/738] Linking target lib/librte_telemetry.so.23.0 00:02:12.096 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.096 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.096 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.096 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.096 [50/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.096 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.096 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.096 [53/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:12.096 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.096 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.096 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.096 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.096 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.096 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.096 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.096 [61/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.096 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.096 [63/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.096 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.096 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.097 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:12.097 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.360 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.360 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.360 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.360 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.360 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.360 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.360 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.360 [75/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.360 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.360 [77/738] Generating lib/rte_eal_def with a custom command 00:02:12.360 [78/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.360 [79/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.360 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:12.360 [81/738] Generating lib/rte_ring_def with a custom command 00:02:12.360 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:12.360 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:12.360 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:12.360 [85/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.360 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.621 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.621 [88/738] Linking static target lib/librte_ring.a 00:02:12.621 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.621 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:12.621 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:12.621 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.621 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.621 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.882 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.882 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:12.882 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.882 [98/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:12.882 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.882 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.882 [101/738] Linking static target lib/librte_eal.a 00:02:12.882 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.882 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.143 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.143 [105/738] Linking static target lib/librte_rcu.a 00:02:13.143 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.143 [107/738] Linking static target lib/librte_mempool.a 00:02:13.143 [108/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.143 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.143 [110/738] Generating lib/rte_net_def with a custom command 00:02:13.405 [111/738] Generating lib/rte_net_mingw with a custom command 00:02:13.405 [112/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.405 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.405 [114/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.405 [115/738] Generating lib/rte_meter_def with a custom command 00:02:13.405 [116/738] Generating lib/rte_meter_mingw with a custom command 00:02:13.405 [117/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.405 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.405 [119/738] Linking static target lib/librte_meter.a 00:02:13.405 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.667 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.667 [122/738] Compiling 
C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:13.667 [123/738] Linking static target lib/librte_net.a 00:02:13.667 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.928 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.928 [126/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.928 [127/738] Linking static target lib/librte_mbuf.a 00:02:13.928 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.928 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.928 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.928 [131/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.928 [132/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.189 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.451 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.451 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.451 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.451 [137/738] Generating lib/rte_ethdev_def with a custom command 00:02:14.451 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:14.451 [139/738] Generating lib/rte_pci_def with a custom command 00:02:14.451 [140/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.451 [141/738] Generating lib/rte_pci_mingw with a custom command 00:02:14.451 [142/738] Linking static target lib/librte_pci.a 00:02:14.451 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.451 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.451 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.712 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.712 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.712 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.712 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.712 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.712 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.713 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.713 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.713 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.971 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.971 [156/738] Generating lib/rte_cmdline_def with a custom command 00:02:14.971 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.971 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:14.971 [159/738] Generating lib/rte_metrics_def with a custom command 00:02:14.971 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.971 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:02:14.971 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.971 [163/738] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.971 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:14.971 [165/738] Generating lib/rte_hash_def with a custom command 00:02:14.971 [166/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.971 [167/738] Linking static target lib/librte_cmdline.a 00:02:14.971 [168/738] Generating lib/rte_hash_mingw with a custom command 00:02:14.971 [169/738] Generating lib/rte_timer_def with a custom command 00:02:14.971 [170/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.229 [171/738] Generating lib/rte_timer_mingw with a custom command 00:02:15.229 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.229 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:15.229 [174/738] Linking static target lib/librte_metrics.a 00:02:15.229 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.229 [176/738] Linking static target lib/librte_timer.a 00:02:15.487 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.487 [178/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.745 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:15.745 [180/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:15.745 [181/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.745 [182/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.745 [183/738] Generating lib/rte_acl_def with a custom command 00:02:15.745 [184/738] Generating lib/rte_acl_mingw with a custom command 00:02:15.745 [185/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:15.745 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:15.745 [187/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:16.003 [188/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:16.003 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:16.003 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:16.003 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.003 [192/738] Linking static target lib/librte_ethdev.a 00:02:16.003 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:16.003 [194/738] Linking static target lib/librte_bitratestats.a 00:02:16.261 [195/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:16.261 [196/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.519 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:16.519 [198/738] Linking static target lib/librte_bbdev.a 00:02:16.519 [199/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:16.519 [200/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:16.776 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:16.776 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:16.776 [203/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.034 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:17.034 [205/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.034 [206/738] 
Linking static target lib/librte_hash.a 00:02:17.034 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:17.291 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:17.291 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:17.291 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:17.291 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:17.291 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:02:17.291 [213/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:17.291 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:17.549 [215/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.549 [216/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:17.549 [217/738] Linking static target lib/librte_cfgfile.a 00:02:17.549 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:17.549 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:17.549 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:17.808 [221/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.808 [222/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.808 [223/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.808 [224/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:17.808 [225/738] Generating lib/rte_cryptodev_def with a custom command 00:02:17.808 [226/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:17.808 [227/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:17.808 [228/738] Linking static target lib/librte_bpf.a 00:02:17.808 [229/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:17.808 [230/738] Linking static target lib/librte_acl.a 00:02:18.065 [231/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.065 [232/738] Linking static target lib/librte_compressdev.a 00:02:18.065 [233/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.065 [234/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.065 [235/738] Generating lib/rte_distributor_def with a custom command 00:02:18.065 [236/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.065 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.065 [238/738] Generating lib/rte_distributor_mingw with a custom command 00:02:18.065 [239/738] Generating lib/rte_efd_mingw with a custom command 00:02:18.065 [240/738] Generating lib/rte_efd_def with a custom command 00:02:18.322 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:18.322 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:18.322 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:18.322 [244/738] Linking static target lib/librte_distributor.a 00:02:18.322 [245/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.580 [246/738] Linking target lib/librte_eal.so.23.0 00:02:18.581 [247/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:18.581 [248/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:18.581 [249/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.581 [250/738] Linking target lib/librte_ring.so.23.0 00:02:18.581 [251/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:18.581 [252/738] Linking target lib/librte_meter.so.23.0 00:02:18.581 [253/738] Linking target lib/librte_pci.so.23.0 00:02:18.581 [254/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.581 [255/738] Linking target lib/librte_timer.so.23.0 00:02:18.581 [256/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:18.581 [257/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:18.581 [258/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:18.581 [259/738] Linking target lib/librte_acl.so.23.0 00:02:18.838 [260/738] Linking target lib/librte_rcu.so.23.0 00:02:18.838 [261/738] Linking target lib/librte_mempool.so.23.0 00:02:18.838 [262/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:18.838 [263/738] Linking target lib/librte_cfgfile.so.23.0 00:02:18.838 [264/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:18.838 [265/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:18.838 [266/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:18.838 [267/738] Linking target lib/librte_mbuf.so.23.0 00:02:19.095 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:19.095 [269/738] Linking target lib/librte_net.so.23.0 00:02:19.096 [270/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:19.096 [271/738] Linking target lib/librte_bbdev.so.23.0 00:02:19.096 [272/738] Linking target lib/librte_compressdev.so.23.0 00:02:19.096 [273/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:19.096 [274/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:19.096 [275/738] Linking target lib/librte_distributor.so.23.0 00:02:19.096 [276/738] Generating lib/rte_eventdev_def with a custom command 00:02:19.096 [277/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:19.096 [278/738] Linking target lib/librte_hash.so.23.0 00:02:19.096 [279/738] Linking target lib/librte_cmdline.so.23.0 00:02:19.096 [280/738] Generating lib/rte_gpudev_def with a custom command 00:02:19.096 [281/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:19.353 [282/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:19.353 [283/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:19.353 [284/738] Linking static target lib/librte_efd.a 00:02:19.353 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.353 [286/738] Linking target lib/librte_efd.so.23.0 00:02:19.612 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:19.612 [288/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:19.612 [289/738] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.612 [290/738] Linking static target lib/librte_cryptodev.a 00:02:19.612 [291/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.612 [292/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:19.612 [293/738] Generating lib/rte_gro_def with a custom command 00:02:19.612 [294/738] Generating lib/rte_gro_mingw with a custom command 00:02:19.612 [295/738] Linking target lib/librte_ethdev.so.23.0 00:02:19.612 [296/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:19.612 [297/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.612 [298/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:19.870 [299/738] Linking static target lib/librte_gpudev.a 00:02:19.870 [300/738] Linking target lib/librte_metrics.so.23.0 00:02:19.870 [301/738] Linking target lib/librte_bpf.so.23.0 00:02:19.870 [302/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:19.870 [303/738] Linking target lib/librte_bitratestats.so.23.0 00:02:19.870 [304/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:19.870 [305/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:19.870 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:19.870 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.128 [308/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.128 [309/738] Linking static target lib/librte_gro.a 00:02:20.128 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.128 [311/738] Generating lib/rte_gso_def with a custom command 00:02:20.128 [312/738] Generating lib/rte_gso_mingw with a custom command 00:02:20.128 [313/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.128 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.128 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.128 [316/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.128 [317/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:20.387 [318/738] Linking target lib/librte_gro.so.23.0 00:02:20.387 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.387 [320/738] Linking static target lib/librte_gso.a 00:02:20.387 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.387 [322/738] Linking target lib/librte_gpudev.so.23.0 00:02:20.387 [323/738] Generating lib/rte_ip_frag_def with a custom command 00:02:20.387 [324/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:20.387 [325/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.387 [326/738] Linking target lib/librte_gso.so.23.0 00:02:20.387 [327/738] Generating lib/rte_jobstats_def with a custom command 00:02:20.387 [328/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:20.387 [329/738] Linking static target lib/librte_jobstats.a 00:02:20.387 [330/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:20.645 [331/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.645 [332/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:20.645 [333/738] Generating 
lib/rte_latencystats_def with a custom command 00:02:20.646 [334/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:20.646 [335/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:20.646 [336/738] Linking static target lib/librte_eventdev.a 00:02:20.646 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:20.646 [338/738] Generating lib/rte_lpm_def with a custom command 00:02:20.646 [339/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.646 [340/738] Generating lib/rte_lpm_mingw with a custom command 00:02:20.646 [341/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.646 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.903 [343/738] Linking target lib/librte_jobstats.so.23.0 00:02:20.903 [344/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.903 [345/738] Linking static target lib/librte_ip_frag.a 00:02:20.903 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:20.903 [347/738] Linking static target lib/librte_latencystats.a 00:02:21.235 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.235 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:21.235 [350/738] Linking target lib/librte_ip_frag.so.23.0 00:02:21.235 [351/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.235 [352/738] Linking target lib/librte_cryptodev.so.23.0 00:02:21.235 [353/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:21.235 [354/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.235 [355/738] Generating lib/rte_member_def with a custom command 00:02:21.235 [356/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:21.235 [357/738] Linking target lib/librte_latencystats.so.23.0 00:02:21.235 [358/738] Generating lib/rte_member_mingw with a custom command 00:02:21.235 [359/738] Generating lib/rte_pcapng_def with a custom command 00:02:21.235 [360/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:21.235 [361/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:21.235 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.235 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.492 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.492 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.492 [366/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.492 [367/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:21.492 [368/738] Linking static target lib/librte_lpm.a 00:02:21.492 [369/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:21.492 [370/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:21.773 [371/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:21.773 [372/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.773 [373/738] Generating lib/rte_power_mingw with a custom command 00:02:21.774 [374/738] Generating 
lib/rte_power_def with a custom command 00:02:21.774 [375/738] Generating lib/rte_rawdev_def with a custom command 00:02:21.774 [376/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:21.774 [377/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.774 [378/738] Linking target lib/librte_lpm.so.23.0 00:02:21.774 [379/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.774 [380/738] Generating lib/rte_regexdev_def with a custom command 00:02:21.774 [381/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.774 [382/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.774 [383/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:21.774 [384/738] Linking static target lib/librte_pcapng.a 00:02:21.774 [385/738] Generating lib/rte_dmadev_def with a custom command 00:02:21.774 [386/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:21.774 [387/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:21.774 [388/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:22.031 [389/738] Generating lib/rte_rib_def with a custom command 00:02:22.031 [390/738] Generating lib/rte_rib_mingw with a custom command 00:02:22.031 [391/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:22.031 [392/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.031 [393/738] Linking static target lib/librte_rawdev.a 00:02:22.031 [394/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.031 [395/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.031 [396/738] Linking static target lib/librte_power.a 00:02:22.031 [397/738] Linking target lib/librte_pcapng.so.23.0 00:02:22.031 [398/738] Linking target lib/librte_eventdev.so.23.0 00:02:22.031 [399/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:22.031 [400/738] Linking static target lib/librte_member.a 00:02:22.031 [401/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:22.290 [402/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:22.290 [403/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.290 [404/738] Linking static target lib/librte_dmadev.a 00:02:22.290 [405/738] Generating lib/rte_reorder_mingw with a custom command 00:02:22.290 [406/738] Generating lib/rte_reorder_def with a custom command 00:02:22.290 [407/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:22.290 [408/738] Linking static target lib/librte_regexdev.a 00:02:22.290 [409/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:22.290 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:22.290 [411/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:22.290 [412/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.290 [413/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.290 [414/738] Generating lib/rte_sched_def with a custom command 00:02:22.290 [415/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:22.290 [416/738] Linking target lib/librte_rawdev.so.23.0 00:02:22.290 [417/738] 
Generating lib/rte_sched_mingw with a custom command 00:02:22.290 [418/738] Linking target lib/librte_member.so.23.0 00:02:22.548 [419/738] Generating lib/rte_security_def with a custom command 00:02:22.548 [420/738] Generating lib/rte_security_mingw with a custom command 00:02:22.548 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:22.548 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:22.548 [423/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.548 [424/738] Linking static target lib/librte_reorder.a 00:02:22.548 [425/738] Generating lib/rte_stack_def with a custom command 00:02:22.548 [426/738] Generating lib/rte_stack_mingw with a custom command 00:02:22.548 [427/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:22.548 [428/738] Linking static target lib/librte_stack.a 00:02:22.548 [429/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.548 [430/738] Linking target lib/librte_dmadev.so.23.0 00:02:22.548 [431/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.548 [432/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:22.548 [433/738] Linking static target lib/librte_rib.a 00:02:22.548 [434/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:22.548 [435/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.807 [436/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.807 [437/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.807 [438/738] Linking target lib/librte_reorder.so.23.0 00:02:22.807 [439/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.807 [440/738] Linking target lib/librte_stack.so.23.0 00:02:22.807 [441/738] Linking target lib/librte_power.so.23.0 00:02:22.807 [442/738] Linking target lib/librte_regexdev.so.23.0 00:02:22.807 [443/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.807 [444/738] Generating lib/rte_vhost_def with a custom command 00:02:22.807 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.807 [446/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.807 [447/738] Linking static target lib/librte_security.a 00:02:23.065 [448/738] Generating lib/rte_vhost_mingw with a custom command 00:02:23.065 [449/738] Linking target lib/librte_rib.so.23.0 00:02:23.065 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.065 [451/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:23.324 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.324 [453/738] Linking target lib/librte_security.so.23.0 00:02:23.324 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.324 [455/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:23.324 [456/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:23.324 [457/738] Linking static target lib/librte_sched.a 00:02:23.583 [458/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.583 [459/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:23.583 [460/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.583 [461/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:23.583 [462/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:23.583 [463/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:23.583 [464/738] Linking target lib/librte_sched.so.23.0 00:02:23.583 [465/738] Generating lib/rte_ipsec_def with a custom command 00:02:23.583 [466/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:23.843 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:23.843 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:23.843 [469/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:23.843 [470/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:23.843 [471/738] Generating lib/rte_fib_def with a custom command 00:02:23.843 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:24.102 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:24.102 [474/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:24.360 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:24.360 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:24.360 [477/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:24.360 [478/738] Linking static target lib/librte_ipsec.a 00:02:24.360 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:24.618 [480/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:24.618 [481/738] Linking static target lib/librte_fib.a 00:02:24.618 [482/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.618 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:24.618 [484/738] Linking target lib/librte_ipsec.so.23.0 00:02:24.618 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:24.618 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:24.618 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:24.876 [488/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.876 [489/738] Linking target lib/librte_fib.so.23.0 00:02:24.876 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:25.134 [491/738] Generating lib/rte_port_def with a custom command 00:02:25.134 [492/738] Generating lib/rte_port_mingw with a custom command 00:02:25.134 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:25.134 [494/738] Generating lib/rte_pdump_def with a custom command 00:02:25.134 [495/738] Generating lib/rte_pdump_mingw with a custom command 00:02:25.134 [496/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:25.134 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:25.134 [498/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.134 [499/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:25.393 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:25.393 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:25.393 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:25.651 [503/738] Compiling C 
object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:25.651 [504/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:25.651 [505/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:25.651 [506/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:25.651 [507/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:25.651 [508/738] Linking static target lib/librte_port.a 00:02:25.651 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:25.651 [510/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:25.651 [511/738] Linking static target lib/librte_pdump.a 00:02:25.909 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:25.909 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.909 [514/738] Linking target lib/librte_pdump.so.23.0 00:02:25.909 [515/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.168 [516/738] Linking target lib/librte_port.so.23.0 00:02:26.168 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:26.168 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:26.168 [519/738] Generating lib/rte_table_def with a custom command 00:02:26.168 [520/738] Generating lib/rte_table_mingw with a custom command 00:02:26.168 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:26.168 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:26.168 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:26.427 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:26.427 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:26.427 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:26.427 [527/738] Generating lib/rte_pipeline_def with a custom command 00:02:26.427 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:26.427 [529/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:26.685 [530/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:26.685 [531/738] Linking static target lib/librte_table.a 00:02:26.685 [532/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.685 [533/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:26.943 [534/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:26.943 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:26.943 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:26.943 [537/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.943 [538/738] Generating lib/rte_graph_def with a custom command 00:02:26.943 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:26.943 [540/738] Linking target lib/librte_table.so.23.0 00:02:26.943 [541/738] Generating lib/rte_graph_mingw with a custom command 00:02:27.202 [542/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:27.202 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:27.202 [544/738] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:27.202 [545/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:27.460 [546/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:27.460 [547/738] Linking static target lib/librte_graph.a 00:02:27.460 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:27.460 [549/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:27.460 [550/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:27.460 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:27.460 [552/738] Generating lib/rte_node_def with a custom command 00:02:27.719 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:27.719 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:27.719 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.719 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.719 [557/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.977 [558/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:27.977 [559/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:27.977 [560/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:27.977 [561/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:27.977 [562/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.977 [563/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.977 [564/738] Linking target lib/librte_graph.so.23.0 00:02:27.977 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.977 [566/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:27.977 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:27.977 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:27.977 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:27.977 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:28.236 [571/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:28.236 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:28.236 [573/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.236 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.236 [575/738] Linking static target lib/librte_node.a 00:02:28.236 [576/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.236 [577/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.236 [578/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.236 [579/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.236 [580/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.236 [581/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.236 [582/738] Linking static target drivers/librte_bus_pci.a 00:02:28.236 [583/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.236 [584/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.236 [585/738] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.236 [586/738] Linking static target drivers/librte_bus_vdev.a 00:02:28.494 [587/738] Linking target lib/librte_node.so.23.0 00:02:28.494 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.494 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.494 [590/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:28.494 [591/738] Linking target drivers/librte_bus_vdev.so.23.0 00:02:28.494 [592/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.752 [593/738] Linking target drivers/librte_bus_pci.so.23.0 00:02:28.752 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:28.753 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:28.753 [596/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:28.753 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:28.753 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.753 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.011 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.011 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.011 [602/738] Linking static target drivers/librte_mempool_ring.a 00:02:29.012 [603/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.012 [604/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:29.012 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:02:29.269 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:29.527 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:29.527 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:29.527 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:29.527 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:30.094 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:30.094 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:30.094 [613/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:30.094 [614/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:30.094 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:30.354 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:30.354 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:02:30.354 [618/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:30.354 [619/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:30.920 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:30.920 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:31.179 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:31.179 [623/738] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:31.179 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:31.179 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:31.438 [626/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:31.438 [627/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:31.438 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:31.438 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:31.740 [630/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:31.740 [631/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:31.740 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:31.740 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:31.740 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:32.002 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:32.002 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:32.002 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:32.002 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:32.002 [639/738] Linking static target drivers/librte_net_i40e.a 00:02:32.260 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:32.260 [641/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:32.260 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:32.260 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:32.519 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:32.519 [645/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.519 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:32.519 [647/738] Linking target drivers/librte_net_i40e.so.23.0 00:02:32.519 [648/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:32.519 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:32.777 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:32.777 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:32.777 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:32.777 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:33.035 [654/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:33.035 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:33.035 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:33.035 [657/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:33.035 
[658/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:33.035 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:33.035 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:33.293 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:33.293 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:33.551 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:33.551 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:33.810 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:33.810 [666/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.810 [667/738] Linking static target lib/librte_vhost.a 00:02:34.068 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:34.068 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:34.068 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:34.068 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:34.327 [672/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:34.327 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:34.327 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.327 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:34.327 [676/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:34.586 [677/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:34.586 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:34.586 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:34.586 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:34.586 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:34.586 [682/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.843 [683/738] Linking target lib/librte_vhost.so.23.0 00:02:34.843 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:34.843 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:34.843 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:34.843 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:34.843 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:35.102 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:35.102 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:35.102 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:35.102 [692/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:35.360 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:35.360 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:35.617 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:35.617 
[696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:35.617 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:35.617 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:35.875 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:35.875 [700/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:35.875 [701/738] Linking static target lib/librte_pipeline.a 00:02:35.875 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:36.133 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:36.133 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:36.133 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:36.133 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:36.133 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:36.133 [708/738] Linking target app/dpdk-dumpcap 00:02:36.391 [709/738] Linking target app/dpdk-pdump 00:02:36.391 [710/738] Linking target app/dpdk-proc-info 00:02:36.391 [711/738] Linking target app/dpdk-test-acl 00:02:36.391 [712/738] Linking target app/dpdk-test-bbdev 00:02:36.391 [713/738] Linking target app/dpdk-test-cmdline 00:02:36.391 [714/738] Linking target app/dpdk-test-compress-perf 00:02:36.649 [715/738] Linking target app/dpdk-test-crypto-perf 00:02:36.649 [716/738] Linking target app/dpdk-test-eventdev 00:02:36.649 [717/738] Linking target app/dpdk-test-fib 00:02:36.649 [718/738] Linking target app/dpdk-test-flow-perf 00:02:36.649 [719/738] Linking target app/dpdk-test-gpudev 00:02:36.649 [720/738] Linking target app/dpdk-test-pipeline 00:02:36.908 [721/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:36.908 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:37.166 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:37.166 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:37.166 [725/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:37.166 [726/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:37.166 [727/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:37.424 [728/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:37.424 [729/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:37.424 [730/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:37.424 [731/738] Linking target app/dpdk-test-sad 00:02:37.716 [732/738] Linking target app/dpdk-test-regex 00:02:37.999 [733/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.999 [734/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:37.999 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:37.999 [736/738] Linking target lib/librte_pipeline.so.23.0 00:02:37.999 [737/738] Linking target app/dpdk-test-security-perf 00:02:38.258 [738/738] Linking target app/dpdk-testpmd 00:02:38.258 02:49:54 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:38.258 02:49:54 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:38.258 02:49:54 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp 
-j10 install 00:02:38.258 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:38.258 [0/1] Installing files. 00:02:38.522 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.522 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 
Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.523 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:38.524 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.524 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.525 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.525 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:38.526 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:38.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:38.527 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:38.527 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.527 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.788 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:38.789 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.789 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.789 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.789 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.789 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:38.789 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.789 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.790 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.791 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.792 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.792 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:38.792 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:38.792 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:38.792 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:38.792 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:38.792 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:38.792 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:38.792 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:38.792 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:38.792 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:38.792 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:38.792 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:38.792 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:38.792 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:38.792 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:38.792 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:38.792 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:38.792 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:38.792 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:38.792 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:02:38.792 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:38.792 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:38.792 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:38.792 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:38.792 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:38.792 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:38.792 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:38.792 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:38.792 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:38.792 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:38.792 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:38.793 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:38.793 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:38.793 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:38.793 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:38.793 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:38.793 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:38.793 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:38.793 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:38.793 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:38.793 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:38.793 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:38.793 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:38.793 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:38.793 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:38.793 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:38.793 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:38.793 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:38.793 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:38.793 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:38.793 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:38.793 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:38.793 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:38.793 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:38.793 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:38.793 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:38.793 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:38.793 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:38.793 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:38.793 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:38.793 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:38.793 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:38.793 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:38.793 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:38.793 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:38.793 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:38.793 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:38.793 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:38.793 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:38.793 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:38.793 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:38.793 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:38.793 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:38.793 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:38.793 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:38.793 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:38.793 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:38.793 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:38.793 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:38.793 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:38.793 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:38.793 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:38.793 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:38.793 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:38.793 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:38.793 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:38.793 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:38.793 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:38.793 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:38.793 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:38.793 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:38.793 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:38.793 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:38.793 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:38.793 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:38.793 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:38.793 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:38.793 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:38.793 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:38.793 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:38.793 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:38.793 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:38.793 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:38.793 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:38.793 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:38.793 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:38.793 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:38.793 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:38.793 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:38.793 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:38.793 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:38.793 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:38.793 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:38.793 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:38.793 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:38.793 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:38.793 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:38.794 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:38.794 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:38.794 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:38.794 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:38.794 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:38.794 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:38.794 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:38.794 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:38.794 02:49:54 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:38.794 02:49:54 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.794 00:02:38.794 real 0m33.919s 00:02:38.794 user 3m48.502s 00:02:38.794 sys 0m35.478s 00:02:38.794 02:49:54 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:38.794 02:49:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:38.794 ************************************ 00:02:38.794 END TEST build_native_dpdk 00:02:38.794 ************************************ 00:02:39.051 02:49:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:39.051 02:49:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:39.051 02:49:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:39.051 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:39.051 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:39.051 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:02:39.051 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:39.310 Using 'verbs' RDMA provider 00:02:50.232 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:00.207 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:00.724 Creating mk/config.mk...done. 00:03:00.724 Creating mk/cc.flags.mk...done. 00:03:00.724 Type 'make' to build. 00:03:00.724 02:50:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:00.724 02:50:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:00.724 02:50:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:00.724 02:50:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.724 ************************************ 00:03:00.724 START TEST make 00:03:00.724 ************************************ 00:03:00.724 02:50:16 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:00.983 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:00.983 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:00.983 meson setup builddir \ 00:03:00.983 -Dwith-libaio=enabled \ 00:03:00.983 -Dwith-liburing=enabled \ 00:03:00.983 -Dwith-libvfn=disabled \ 00:03:00.983 -Dwith-spdk=disabled \ 00:03:00.983 -Dexamples=false \ 00:03:00.983 -Dtests=false \ 00:03:00.983 -Dtools=false && \ 00:03:00.983 meson compile -C builddir && \ 00:03:00.983 cd -) 00:03:00.983 make[1]: Nothing to be done for 'all'.
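The long run of "Installing symlink pointing to ..." entries above is the standard ELF shared-library versioning chain: the real object is the fully versioned librte_X.so.23.0, the soname link librte_X.so.23 is what executables resolve at run time, and the unversioned librte_X.so is what the link editor uses when building against the tree. A minimal sketch of the same chain, using the install directory from this log (an illustrative shell loop, not the actual Meson install logic):

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  for real in librte_*.so.23.0; do
      soname=${real%.0}       # librte_X.so.23 - the runtime (soname) link
      devlink=${soname%.23}   # librte_X.so    - the build-time linker link
      ln -sf "$real" "$soname"
      ln -sf "$soname" "$devlink"
  done

The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines interleaved above show the driver (PMD) libraries being placed under the dpdk/pmds-23.0 plugin directory instead; the custom symlink-drivers-solibs.sh install script then recreates the same .so/.so.23/.so.23.0 chain for them there, which is why a second set of "Installing symlink pointing to librte_bus_pci..." entries appears with lib/dpdk/pmds-23.0 paths.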
00:03:02.881 The Meson build system 00:03:02.881 Version: 1.5.0 00:03:02.881 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:02.881 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:02.881 Build type: native build 00:03:02.881 Project name: xnvme 00:03:02.881 Project version: 0.7.5 00:03:02.881 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:02.881 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:02.881 Host machine cpu family: x86_64 00:03:02.881 Host machine cpu: x86_64 00:03:02.881 Message: host_machine.system: linux 00:03:02.881 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:02.881 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:02.881 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:02.881 Run-time dependency threads found: YES 00:03:02.881 Has header "setupapi.h" : NO 00:03:02.881 Has header "linux/blkzoned.h" : YES 00:03:02.881 Has header "linux/blkzoned.h" : YES (cached) 00:03:02.881 Has header "libaio.h" : YES 00:03:02.881 Library aio found: YES 00:03:02.881 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:02.881 Run-time dependency liburing found: YES 2.2 00:03:02.881 Dependency libvfn skipped: feature with-libvfn disabled 00:03:02.881 Found CMake: /usr/bin/cmake (3.27.7) 00:03:02.881 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:02.881 Subproject spdk : skipped: feature with-spdk disabled 00:03:02.881 Run-time dependency appleframeworks found: NO (tried framework) 00:03:02.881 Run-time dependency appleframeworks found: NO (tried framework) 00:03:02.881 Library rt found: YES 00:03:02.881 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:02.881 Configuring xnvme_config.h using configuration 00:03:02.881 Configuring xnvme.spec using configuration 00:03:02.881 Run-time dependency bash-completion found: YES 2.11 00:03:02.881 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:02.881 Program cp found: YES (/usr/bin/cp) 00:03:02.881 Build targets in project: 3 00:03:02.881 00:03:02.881 xnvme 0.7.5 00:03:02.881 00:03:02.881 Subprojects 00:03:02.881 spdk : NO Feature 'with-spdk' disabled 00:03:02.881 00:03:02.881 User defined options 00:03:02.881 examples : false 00:03:02.881 tests : false 00:03:02.881 tools : false 00:03:02.881 with-libaio : enabled 00:03:02.881 with-liburing: enabled 00:03:02.881 with-libvfn : disabled 00:03:02.881 with-spdk : disabled 00:03:02.881 00:03:02.881 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.881 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:02.881 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:02.881 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:02.881 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:02.881 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:02.881 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:02.881 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:02.881 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:03.139 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:03.139 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:03.139 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:03.139 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:03.139 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:03.139 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:03.139 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:03.139 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:03.139 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:03.139 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:03.139 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:03.139 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:03.139 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:03.139 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:03.139 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:03.139 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:03.139 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:03.139 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:03.139 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:03.139 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:03.139 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:03.139 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:03.139 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:03.139 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:03.140 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:03.140 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:03.140 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:03.140 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:03.140 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:03.140 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:03.140 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:03.398 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:03.398 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:03.398 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:03.398 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:03.398 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:03.398 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:03.398 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:03.398 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:03.398 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:03.398 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:03.398 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:03.398 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:03.398 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:03.398 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:03.398 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:03.398 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:03.398 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:03.398 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:03.398 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:03.398 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:03.398 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:03.398 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:03.398 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:03.398 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:03.398 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:03.398 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:03.398 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:03.398 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:03.398 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:03.656 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:03.656 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:03.656 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:03.656 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:03.656 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:03.656 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:03.914 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:03.914 [75/76] Linking static target lib/libxnvme.a 00:03:03.914 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:03.914 INFO: autodetecting backend as ninja 00:03:03.914 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:03.914 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:35.997 CC lib/ut_mock/mock.o 00:03:35.997 CC lib/ut/ut.o 00:03:35.997 CC lib/log/log.o 00:03:35.997 CC lib/log/log_deprecated.o 00:03:35.997 CC lib/log/log_flags.o 00:03:35.997 LIB libspdk_ut.a 00:03:35.997 LIB libspdk_ut_mock.a 00:03:35.997 LIB libspdk_log.a 00:03:35.997 SO libspdk_ut.so.2.0 00:03:35.997 SO libspdk_ut_mock.so.6.0 00:03:35.997 SO libspdk_log.so.7.1 00:03:35.997 SYMLINK libspdk_ut.so 00:03:35.997 SYMLINK libspdk_ut_mock.so 00:03:35.997 SYMLINK libspdk_log.so 00:03:35.997 CC lib/dma/dma.o 00:03:35.997 CC lib/util/base64.o 00:03:35.997 CC lib/util/bit_array.o 00:03:35.997 CC lib/util/cpuset.o 00:03:35.997 CC lib/util/crc32.o 00:03:35.997 CC lib/util/crc32c.o 00:03:35.997 CC lib/util/crc16.o 00:03:35.997 CXX lib/trace_parser/trace.o 00:03:35.997 CC lib/ioat/ioat.o 00:03:35.997 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.997 CC lib/util/crc32_ieee.o 00:03:35.997 CC lib/util/crc64.o 00:03:35.997 CC lib/util/dif.o 00:03:35.997 LIB libspdk_dma.a 00:03:35.997 SO libspdk_dma.so.5.0 00:03:35.997 CC lib/vfio_user/host/vfio_user.o 00:03:35.997 CC lib/util/fd.o 00:03:35.997 SYMLINK libspdk_dma.so 00:03:35.997 CC lib/util/fd_group.o 00:03:35.997 CC lib/util/file.o 00:03:35.997 CC lib/util/hexlify.o 00:03:35.997 CC lib/util/iov.o 00:03:35.997 LIB libspdk_ioat.a 
00:03:35.997 CC lib/util/math.o 00:03:35.997 SO libspdk_ioat.so.7.0 00:03:35.997 CC lib/util/net.o 00:03:35.997 CC lib/util/pipe.o 00:03:35.997 SYMLINK libspdk_ioat.so 00:03:35.997 CC lib/util/strerror_tls.o 00:03:35.997 CC lib/util/string.o 00:03:35.997 LIB libspdk_vfio_user.a 00:03:35.997 CC lib/util/uuid.o 00:03:35.997 SO libspdk_vfio_user.so.5.0 00:03:35.997 CC lib/util/xor.o 00:03:35.997 SYMLINK libspdk_vfio_user.so 00:03:35.997 CC lib/util/zipf.o 00:03:35.997 CC lib/util/md5.o 00:03:35.997 LIB libspdk_util.a 00:03:35.997 SO libspdk_util.so.10.1 00:03:35.997 LIB libspdk_trace_parser.a 00:03:35.997 SO libspdk_trace_parser.so.6.0 00:03:35.997 SYMLINK libspdk_util.so 00:03:35.997 SYMLINK libspdk_trace_parser.so 00:03:35.997 CC lib/rdma_utils/rdma_utils.o 00:03:35.997 CC lib/conf/conf.o 00:03:35.997 CC lib/vmd/vmd.o 00:03:35.997 CC lib/vmd/led.o 00:03:35.997 CC lib/json/json_parse.o 00:03:35.997 CC lib/json/json_util.o 00:03:35.997 CC lib/json/json_write.o 00:03:35.997 CC lib/env_dpdk/env.o 00:03:35.997 CC lib/env_dpdk/memory.o 00:03:35.997 CC lib/idxd/idxd.o 00:03:35.997 CC lib/env_dpdk/pci.o 00:03:35.997 CC lib/env_dpdk/init.o 00:03:35.997 LIB libspdk_conf.a 00:03:35.997 SO libspdk_conf.so.6.0 00:03:35.997 LIB libspdk_rdma_utils.a 00:03:35.997 CC lib/env_dpdk/threads.o 00:03:35.997 SYMLINK libspdk_conf.so 00:03:35.997 CC lib/idxd/idxd_user.o 00:03:35.997 SO libspdk_rdma_utils.so.1.0 00:03:35.997 LIB libspdk_json.a 00:03:35.997 SO libspdk_json.so.6.0 00:03:35.997 SYMLINK libspdk_rdma_utils.so 00:03:35.997 CC lib/env_dpdk/pci_ioat.o 00:03:35.997 SYMLINK libspdk_json.so 00:03:35.997 CC lib/env_dpdk/pci_virtio.o 00:03:35.997 CC lib/idxd/idxd_kernel.o 00:03:35.997 CC lib/env_dpdk/pci_vmd.o 00:03:35.997 CC lib/rdma_provider/common.o 00:03:35.997 CC lib/env_dpdk/pci_idxd.o 00:03:35.997 CC lib/env_dpdk/pci_event.o 00:03:35.997 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:35.997 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.997 CC lib/env_dpdk/sigbus_handler.o 00:03:35.997 CC lib/env_dpdk/pci_dpdk.o 00:03:35.997 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.997 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.998 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.998 LIB libspdk_idxd.a 00:03:35.998 LIB libspdk_vmd.a 00:03:35.998 LIB libspdk_rdma_provider.a 00:03:35.998 SO libspdk_idxd.so.12.1 00:03:35.998 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:35.998 SO libspdk_rdma_provider.so.7.0 00:03:35.998 SO libspdk_vmd.so.6.0 00:03:35.998 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.998 SYMLINK libspdk_rdma_provider.so 00:03:35.998 SYMLINK libspdk_idxd.so 00:03:35.998 SYMLINK libspdk_vmd.so 00:03:35.998 LIB libspdk_jsonrpc.a 00:03:35.998 SO libspdk_jsonrpc.so.6.0 00:03:35.998 SYMLINK libspdk_jsonrpc.so 00:03:35.998 CC lib/rpc/rpc.o 00:03:36.256 LIB libspdk_env_dpdk.a 00:03:36.256 LIB libspdk_rpc.a 00:03:36.256 SO libspdk_rpc.so.6.0 00:03:36.256 SO libspdk_env_dpdk.so.15.1 00:03:36.256 SYMLINK libspdk_rpc.so 00:03:36.515 SYMLINK libspdk_env_dpdk.so 00:03:36.515 CC lib/notify/notify.o 00:03:36.515 CC lib/notify/notify_rpc.o 00:03:36.515 CC lib/trace/trace_rpc.o 00:03:36.515 CC lib/trace/trace.o 00:03:36.515 CC lib/keyring/keyring.o 00:03:36.515 CC lib/trace/trace_flags.o 00:03:36.515 CC lib/keyring/keyring_rpc.o 00:03:36.515 LIB libspdk_notify.a 00:03:36.515 SO libspdk_notify.so.6.0 00:03:36.774 LIB libspdk_keyring.a 00:03:36.774 SO libspdk_keyring.so.2.0 00:03:36.774 SYMLINK libspdk_notify.so 00:03:36.774 LIB libspdk_trace.a 00:03:36.774 SYMLINK libspdk_keyring.so 00:03:36.774 SO libspdk_trace.so.11.0 00:03:36.774 SYMLINK 
libspdk_trace.so 00:03:37.034 CC lib/sock/sock_rpc.o 00:03:37.034 CC lib/sock/sock.o 00:03:37.034 CC lib/thread/thread.o 00:03:37.034 CC lib/thread/iobuf.o 00:03:37.296 LIB libspdk_sock.a 00:03:37.555 SO libspdk_sock.so.10.0 00:03:37.555 SYMLINK libspdk_sock.so 00:03:37.815 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.815 CC lib/nvme/nvme_ctrlr.o 00:03:37.815 CC lib/nvme/nvme_ns_cmd.o 00:03:37.815 CC lib/nvme/nvme_fabric.o 00:03:37.815 CC lib/nvme/nvme_ns.o 00:03:37.815 CC lib/nvme/nvme_qpair.o 00:03:37.815 CC lib/nvme/nvme_pcie_common.o 00:03:37.815 CC lib/nvme/nvme_pcie.o 00:03:37.815 CC lib/nvme/nvme.o 00:03:38.382 CC lib/nvme/nvme_quirks.o 00:03:38.382 CC lib/nvme/nvme_transport.o 00:03:38.382 CC lib/nvme/nvme_discovery.o 00:03:38.382 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:38.382 LIB libspdk_thread.a 00:03:38.382 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:38.382 CC lib/nvme/nvme_tcp.o 00:03:38.641 SO libspdk_thread.so.11.0 00:03:38.641 CC lib/nvme/nvme_opal.o 00:03:38.641 SYMLINK libspdk_thread.so 00:03:38.641 CC lib/nvme/nvme_io_msg.o 00:03:38.641 CC lib/nvme/nvme_poll_group.o 00:03:38.641 CC lib/nvme/nvme_zns.o 00:03:38.899 CC lib/nvme/nvme_stubs.o 00:03:38.899 CC lib/nvme/nvme_auth.o 00:03:38.899 CC lib/nvme/nvme_cuse.o 00:03:39.157 CC lib/nvme/nvme_rdma.o 00:03:39.157 CC lib/accel/accel.o 00:03:39.157 CC lib/blob/blobstore.o 00:03:39.157 CC lib/accel/accel_rpc.o 00:03:39.415 CC lib/init/json_config.o 00:03:39.415 CC lib/accel/accel_sw.o 00:03:39.415 CC lib/blob/request.o 00:03:39.674 CC lib/init/subsystem.o 00:03:39.674 CC lib/init/subsystem_rpc.o 00:03:39.674 CC lib/init/rpc.o 00:03:39.674 CC lib/blob/zeroes.o 00:03:39.933 CC lib/blob/blob_bs_dev.o 00:03:39.933 CC lib/virtio/virtio.o 00:03:39.933 CC lib/virtio/virtio_vhost_user.o 00:03:39.933 LIB libspdk_init.a 00:03:39.933 CC lib/virtio/virtio_vfio_user.o 00:03:39.933 SO libspdk_init.so.6.0 00:03:39.933 CC lib/virtio/virtio_pci.o 00:03:39.933 SYMLINK libspdk_init.so 00:03:40.190 LIB libspdk_nvme.a 00:03:40.190 CC lib/fsdev/fsdev.o 00:03:40.191 CC lib/fsdev/fsdev_io.o 00:03:40.191 CC lib/event/app.o 00:03:40.191 CC lib/event/reactor.o 00:03:40.191 CC lib/event/log_rpc.o 00:03:40.191 CC lib/fsdev/fsdev_rpc.o 00:03:40.191 LIB libspdk_virtio.a 00:03:40.191 SO libspdk_nvme.so.15.0 00:03:40.191 SO libspdk_virtio.so.7.0 00:03:40.191 LIB libspdk_accel.a 00:03:40.449 CC lib/event/app_rpc.o 00:03:40.449 SYMLINK libspdk_virtio.so 00:03:40.449 SO libspdk_accel.so.16.0 00:03:40.449 CC lib/event/scheduler_static.o 00:03:40.449 SYMLINK libspdk_accel.so 00:03:40.449 SYMLINK libspdk_nvme.so 00:03:40.449 CC lib/bdev/bdev.o 00:03:40.708 CC lib/bdev/bdev_rpc.o 00:03:40.708 CC lib/bdev/bdev_zone.o 00:03:40.708 CC lib/bdev/part.o 00:03:40.708 CC lib/bdev/scsi_nvme.o 00:03:40.708 LIB libspdk_event.a 00:03:40.708 SO libspdk_event.so.14.0 00:03:40.708 SYMLINK libspdk_event.so 00:03:40.708 LIB libspdk_fsdev.a 00:03:40.708 SO libspdk_fsdev.so.2.0 00:03:40.708 SYMLINK libspdk_fsdev.so 00:03:40.966 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:41.899 LIB libspdk_fuse_dispatcher.a 00:03:41.899 SO libspdk_fuse_dispatcher.so.1.0 00:03:41.899 SYMLINK libspdk_fuse_dispatcher.so 00:03:42.466 LIB libspdk_blob.a 00:03:42.724 SO libspdk_blob.so.12.0 00:03:42.724 SYMLINK libspdk_blob.so 00:03:42.982 CC lib/lvol/lvol.o 00:03:42.982 CC lib/blobfs/blobfs.o 00:03:42.982 CC lib/blobfs/tree.o 00:03:43.548 LIB libspdk_bdev.a 00:03:43.548 SO libspdk_bdev.so.17.0 00:03:43.548 SYMLINK libspdk_bdev.so 00:03:43.877 CC lib/nvmf/ctrlr.o 00:03:43.877 CC lib/nvmf/ctrlr_bdev.o 00:03:43.877 CC 
lib/nvmf/subsystem.o 00:03:43.878 CC lib/nvmf/ctrlr_discovery.o 00:03:43.878 CC lib/ftl/ftl_core.o 00:03:43.878 CC lib/scsi/dev.o 00:03:43.878 CC lib/nbd/nbd.o 00:03:43.878 CC lib/ublk/ublk.o 00:03:43.878 LIB libspdk_blobfs.a 00:03:43.878 LIB libspdk_lvol.a 00:03:43.878 SO libspdk_blobfs.so.11.0 00:03:43.878 SO libspdk_lvol.so.11.0 00:03:43.878 SYMLINK libspdk_blobfs.so 00:03:43.878 CC lib/scsi/lun.o 00:03:43.878 SYMLINK libspdk_lvol.so 00:03:43.878 CC lib/ublk/ublk_rpc.o 00:03:43.878 CC lib/nbd/nbd_rpc.o 00:03:44.136 CC lib/nvmf/nvmf.o 00:03:44.136 CC lib/scsi/port.o 00:03:44.136 CC lib/ftl/ftl_init.o 00:03:44.136 CC lib/ftl/ftl_layout.o 00:03:44.136 LIB libspdk_nbd.a 00:03:44.136 SO libspdk_nbd.so.7.0 00:03:44.136 CC lib/scsi/scsi.o 00:03:44.136 CC lib/ftl/ftl_debug.o 00:03:44.136 SYMLINK libspdk_nbd.so 00:03:44.136 CC lib/scsi/scsi_bdev.o 00:03:44.395 CC lib/scsi/scsi_pr.o 00:03:44.395 CC lib/nvmf/nvmf_rpc.o 00:03:44.395 LIB libspdk_ublk.a 00:03:44.395 CC lib/ftl/ftl_io.o 00:03:44.395 CC lib/ftl/ftl_sb.o 00:03:44.395 SO libspdk_ublk.so.3.0 00:03:44.395 CC lib/nvmf/transport.o 00:03:44.395 SYMLINK libspdk_ublk.so 00:03:44.395 CC lib/nvmf/tcp.o 00:03:44.653 CC lib/ftl/ftl_l2p.o 00:03:44.653 CC lib/ftl/ftl_l2p_flat.o 00:03:44.653 CC lib/scsi/scsi_rpc.o 00:03:44.654 CC lib/nvmf/stubs.o 00:03:44.912 CC lib/scsi/task.o 00:03:44.912 CC lib/ftl/ftl_nv_cache.o 00:03:44.912 CC lib/ftl/ftl_band.o 00:03:44.912 CC lib/nvmf/mdns_server.o 00:03:44.912 CC lib/nvmf/rdma.o 00:03:44.912 LIB libspdk_scsi.a 00:03:44.912 CC lib/nvmf/auth.o 00:03:44.912 SO libspdk_scsi.so.9.0 00:03:45.170 SYMLINK libspdk_scsi.so 00:03:45.170 CC lib/ftl/ftl_band_ops.o 00:03:45.170 CC lib/ftl/ftl_writer.o 00:03:45.170 CC lib/ftl/ftl_rq.o 00:03:45.170 CC lib/iscsi/conn.o 00:03:45.428 CC lib/ftl/ftl_reloc.o 00:03:45.428 CC lib/ftl/ftl_l2p_cache.o 00:03:45.428 CC lib/vhost/vhost.o 00:03:45.428 CC lib/ftl/ftl_p2l.o 00:03:45.428 CC lib/iscsi/init_grp.o 00:03:45.686 CC lib/iscsi/iscsi.o 00:03:45.686 CC lib/vhost/vhost_rpc.o 00:03:45.686 CC lib/iscsi/param.o 00:03:45.943 CC lib/ftl/ftl_p2l_log.o 00:03:45.943 CC lib/iscsi/portal_grp.o 00:03:45.943 CC lib/iscsi/tgt_node.o 00:03:45.943 CC lib/vhost/vhost_scsi.o 00:03:45.943 CC lib/iscsi/iscsi_subsystem.o 00:03:45.943 CC lib/ftl/mngt/ftl_mngt.o 00:03:46.201 CC lib/iscsi/iscsi_rpc.o 00:03:46.201 CC lib/iscsi/task.o 00:03:46.201 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:46.201 CC lib/vhost/vhost_blk.o 00:03:46.201 CC lib/vhost/rte_vhost_user.o 00:03:46.201 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.201 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.460 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.718 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.718 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:46.718 CC lib/ftl/utils/ftl_conf.o 00:03:46.718 LIB libspdk_nvmf.a 00:03:46.718 CC lib/ftl/utils/ftl_md.o 00:03:46.718 CC lib/ftl/utils/ftl_mempool.o 00:03:46.718 CC lib/ftl/utils/ftl_bitmap.o 00:03:46.718 LIB libspdk_iscsi.a 00:03:46.718 SO libspdk_nvmf.so.20.0 00:03:46.976 CC lib/ftl/utils/ftl_property.o 00:03:46.976 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:46.976 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:46.976 SO libspdk_iscsi.so.8.0 00:03:46.976 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.976 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:03:46.976 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.976 LIB libspdk_vhost.a 00:03:46.976 SYMLINK libspdk_nvmf.so 00:03:46.976 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.976 SYMLINK libspdk_iscsi.so 00:03:46.976 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.976 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.976 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.976 SO libspdk_vhost.so.8.0 00:03:46.976 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.235 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.235 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:47.235 SYMLINK libspdk_vhost.so 00:03:47.235 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:47.235 CC lib/ftl/base/ftl_base_dev.o 00:03:47.235 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.235 CC lib/ftl/ftl_trace.o 00:03:47.494 LIB libspdk_ftl.a 00:03:47.494 SO libspdk_ftl.so.9.0 00:03:47.752 SYMLINK libspdk_ftl.so 00:03:48.010 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.010 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.010 CC module/fsdev/aio/fsdev_aio.o 00:03:48.010 CC module/keyring/linux/keyring.o 00:03:48.010 CC module/keyring/file/keyring.o 00:03:48.010 CC module/sock/posix/posix.o 00:03:48.010 CC module/accel/error/accel_error.o 00:03:48.010 CC module/blob/bdev/blob_bdev.o 00:03:48.010 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.010 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.268 LIB libspdk_env_dpdk_rpc.a 00:03:48.268 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.268 CC module/keyring/linux/keyring_rpc.o 00:03:48.268 CC module/keyring/file/keyring_rpc.o 00:03:48.268 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.268 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.268 CC module/accel/error/accel_error_rpc.o 00:03:48.268 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.268 LIB libspdk_scheduler_dynamic.a 00:03:48.268 LIB libspdk_scheduler_gscheduler.a 00:03:48.268 SO libspdk_scheduler_gscheduler.so.4.0 00:03:48.268 SO libspdk_scheduler_dynamic.so.4.0 00:03:48.268 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.268 LIB libspdk_blob_bdev.a 00:03:48.268 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.268 LIB libspdk_keyring_file.a 00:03:48.268 SYMLINK libspdk_scheduler_dynamic.so 00:03:48.268 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:48.268 LIB libspdk_keyring_linux.a 00:03:48.268 SO libspdk_blob_bdev.so.12.0 00:03:48.268 SO libspdk_keyring_file.so.2.0 00:03:48.268 CC module/accel/ioat/accel_ioat.o 00:03:48.268 SO libspdk_keyring_linux.so.1.0 00:03:48.268 LIB libspdk_accel_error.a 00:03:48.268 SO libspdk_accel_error.so.2.0 00:03:48.526 SYMLINK libspdk_blob_bdev.so 00:03:48.526 SYMLINK libspdk_keyring_linux.so 00:03:48.526 SYMLINK libspdk_keyring_file.so 00:03:48.526 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.526 SYMLINK libspdk_accel_error.so 00:03:48.526 CC module/fsdev/aio/linux_aio_mgr.o 00:03:48.526 CC module/accel/dsa/accel_dsa.o 00:03:48.526 CC module/accel/iaa/accel_iaa.o 00:03:48.526 CC module/accel/iaa/accel_iaa_rpc.o 00:03:48.526 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.526 LIB libspdk_accel_ioat.a 00:03:48.526 SO libspdk_accel_ioat.so.6.0 00:03:48.526 SYMLINK libspdk_accel_ioat.so 00:03:48.526 LIB libspdk_accel_iaa.a 00:03:48.526 LIB libspdk_fsdev_aio.a 00:03:48.526 SO libspdk_accel_iaa.so.3.0 00:03:48.784 CC module/blobfs/bdev/blobfs_bdev.o 00:03:48.784 SO libspdk_fsdev_aio.so.1.0 00:03:48.784 CC module/bdev/delay/vbdev_delay.o 00:03:48.784 LIB libspdk_sock_posix.a 00:03:48.784 SYMLINK libspdk_accel_iaa.so 00:03:48.784 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:48.784 CC module/bdev/error/vbdev_error.o 00:03:48.784 SO 
libspdk_sock_posix.so.6.0 00:03:48.784 CC module/bdev/gpt/gpt.o 00:03:48.784 SYMLINK libspdk_fsdev_aio.so 00:03:48.784 CC module/bdev/error/vbdev_error_rpc.o 00:03:48.784 CC module/bdev/malloc/bdev_malloc.o 00:03:48.784 LIB libspdk_accel_dsa.a 00:03:48.784 CC module/bdev/lvol/vbdev_lvol.o 00:03:48.784 SO libspdk_accel_dsa.so.5.0 00:03:48.784 SYMLINK libspdk_sock_posix.so 00:03:48.784 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:48.784 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:48.784 SYMLINK libspdk_accel_dsa.so 00:03:48.784 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:48.784 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.043 LIB libspdk_bdev_error.a 00:03:49.043 CC module/bdev/null/bdev_null.o 00:03:49.043 SO libspdk_bdev_error.so.6.0 00:03:49.043 LIB libspdk_bdev_delay.a 00:03:49.043 LIB libspdk_blobfs_bdev.a 00:03:49.043 SO libspdk_bdev_delay.so.6.0 00:03:49.043 SO libspdk_blobfs_bdev.so.6.0 00:03:49.043 SYMLINK libspdk_bdev_error.so 00:03:49.043 LIB libspdk_bdev_malloc.a 00:03:49.043 CC module/bdev/nvme/bdev_nvme.o 00:03:49.043 SO libspdk_bdev_malloc.so.6.0 00:03:49.043 SYMLINK libspdk_bdev_delay.so 00:03:49.043 SYMLINK libspdk_blobfs_bdev.so 00:03:49.043 CC module/bdev/null/bdev_null_rpc.o 00:03:49.043 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:49.043 SYMLINK libspdk_bdev_malloc.so 00:03:49.043 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.043 LIB libspdk_bdev_gpt.a 00:03:49.043 CC module/bdev/raid/bdev_raid.o 00:03:49.043 LIB libspdk_bdev_lvol.a 00:03:49.043 SO libspdk_bdev_gpt.so.6.0 00:03:49.300 SO libspdk_bdev_lvol.so.6.0 00:03:49.300 CC module/bdev/raid/bdev_raid_rpc.o 00:03:49.300 LIB libspdk_bdev_null.a 00:03:49.300 SYMLINK libspdk_bdev_gpt.so 00:03:49.300 CC module/bdev/split/vbdev_split.o 00:03:49.300 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.300 SYMLINK libspdk_bdev_lvol.so 00:03:49.300 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.300 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:49.300 SO libspdk_bdev_null.so.6.0 00:03:49.300 SYMLINK libspdk_bdev_null.so 00:03:49.300 CC module/bdev/nvme/nvme_rpc.o 00:03:49.300 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.300 CC module/bdev/nvme/vbdev_opal.o 00:03:49.300 LIB libspdk_bdev_split.a 00:03:49.300 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:49.300 SO libspdk_bdev_split.so.6.0 00:03:49.557 SYMLINK libspdk_bdev_split.so 00:03:49.557 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.557 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:49.557 CC module/bdev/raid/raid0.o 00:03:49.557 CC module/bdev/raid/raid1.o 00:03:49.557 LIB libspdk_bdev_passthru.a 00:03:49.557 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:49.557 SO libspdk_bdev_passthru.so.6.0 00:03:49.557 SYMLINK libspdk_bdev_passthru.so 00:03:49.557 LIB libspdk_bdev_zone_block.a 00:03:49.816 CC module/bdev/raid/concat.o 00:03:49.816 SO libspdk_bdev_zone_block.so.6.0 00:03:49.816 CC module/bdev/xnvme/bdev_xnvme.o 00:03:49.816 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:49.816 CC module/bdev/aio/bdev_aio.o 00:03:49.816 CC module/bdev/ftl/bdev_ftl.o 00:03:49.816 SYMLINK libspdk_bdev_zone_block.so 00:03:49.816 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:49.816 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:49.816 CC module/bdev/iscsi/bdev_iscsi.o 00:03:49.816 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:49.816 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:50.074 LIB libspdk_bdev_raid.a 00:03:50.074 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:50.074 LIB libspdk_bdev_xnvme.a 00:03:50.074 SO libspdk_bdev_raid.so.6.0 00:03:50.074 CC module/bdev/aio/bdev_aio_rpc.o 
00:03:50.074 SO libspdk_bdev_xnvme.so.3.0 00:03:50.074 LIB libspdk_bdev_ftl.a 00:03:50.074 SYMLINK libspdk_bdev_xnvme.so 00:03:50.074 SYMLINK libspdk_bdev_raid.so 00:03:50.074 SO libspdk_bdev_ftl.so.6.0 00:03:50.074 SYMLINK libspdk_bdev_ftl.so 00:03:50.074 LIB libspdk_bdev_aio.a 00:03:50.074 LIB libspdk_bdev_iscsi.a 00:03:50.074 SO libspdk_bdev_aio.so.6.0 00:03:50.074 SO libspdk_bdev_iscsi.so.6.0 00:03:50.333 SYMLINK libspdk_bdev_aio.so 00:03:50.333 LIB libspdk_bdev_virtio.a 00:03:50.333 SYMLINK libspdk_bdev_iscsi.so 00:03:50.333 SO libspdk_bdev_virtio.so.6.0 00:03:50.333 SYMLINK libspdk_bdev_virtio.so 00:03:51.710 LIB libspdk_bdev_nvme.a 00:03:51.710 SO libspdk_bdev_nvme.so.7.1 00:03:51.710 SYMLINK libspdk_bdev_nvme.so 00:03:51.969 CC module/event/subsystems/fsdev/fsdev.o 00:03:51.969 CC module/event/subsystems/scheduler/scheduler.o 00:03:51.969 CC module/event/subsystems/sock/sock.o 00:03:51.969 CC module/event/subsystems/iobuf/iobuf.o 00:03:51.969 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:51.969 CC module/event/subsystems/keyring/keyring.o 00:03:51.969 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:51.969 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:51.969 CC module/event/subsystems/vmd/vmd.o 00:03:51.969 LIB libspdk_event_keyring.a 00:03:51.969 LIB libspdk_event_fsdev.a 00:03:51.969 SO libspdk_event_keyring.so.1.0 00:03:51.969 LIB libspdk_event_scheduler.a 00:03:51.969 SO libspdk_event_fsdev.so.1.0 00:03:51.969 LIB libspdk_event_vhost_blk.a 00:03:51.969 LIB libspdk_event_iobuf.a 00:03:51.969 SO libspdk_event_scheduler.so.4.0 00:03:51.969 LIB libspdk_event_sock.a 00:03:51.969 LIB libspdk_event_vmd.a 00:03:51.969 SO libspdk_event_vhost_blk.so.3.0 00:03:51.969 SYMLINK libspdk_event_keyring.so 00:03:51.969 SO libspdk_event_sock.so.5.0 00:03:52.228 SO libspdk_event_iobuf.so.3.0 00:03:52.228 SYMLINK libspdk_event_fsdev.so 00:03:52.228 SO libspdk_event_vmd.so.6.0 00:03:52.228 SYMLINK libspdk_event_scheduler.so 00:03:52.228 SYMLINK libspdk_event_vhost_blk.so 00:03:52.228 SYMLINK libspdk_event_sock.so 00:03:52.228 SYMLINK libspdk_event_vmd.so 00:03:52.228 SYMLINK libspdk_event_iobuf.so 00:03:52.484 CC module/event/subsystems/accel/accel.o 00:03:52.484 LIB libspdk_event_accel.a 00:03:52.484 SO libspdk_event_accel.so.6.0 00:03:52.484 SYMLINK libspdk_event_accel.so 00:03:52.741 CC module/event/subsystems/bdev/bdev.o 00:03:52.998 LIB libspdk_event_bdev.a 00:03:52.998 SO libspdk_event_bdev.so.6.0 00:03:52.998 SYMLINK libspdk_event_bdev.so 00:03:53.256 CC module/event/subsystems/scsi/scsi.o 00:03:53.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:53.256 CC module/event/subsystems/ublk/ublk.o 00:03:53.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:53.256 CC module/event/subsystems/nbd/nbd.o 00:03:53.256 LIB libspdk_event_scsi.a 00:03:53.256 LIB libspdk_event_ublk.a 00:03:53.256 LIB libspdk_event_nbd.a 00:03:53.256 SO libspdk_event_scsi.so.6.0 00:03:53.256 SO libspdk_event_ublk.so.3.0 00:03:53.256 SO libspdk_event_nbd.so.6.0 00:03:53.256 SYMLINK libspdk_event_scsi.so 00:03:53.256 SYMLINK libspdk_event_ublk.so 00:03:53.256 SYMLINK libspdk_event_nbd.so 00:03:53.514 LIB libspdk_event_nvmf.a 00:03:53.514 SO libspdk_event_nvmf.so.6.0 00:03:53.514 SYMLINK libspdk_event_nvmf.so 00:03:53.514 CC module/event/subsystems/iscsi/iscsi.o 00:03:53.514 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.773 LIB libspdk_event_vhost_scsi.a 00:03:53.773 LIB libspdk_event_iscsi.a 00:03:53.773 SO libspdk_event_vhost_scsi.so.3.0 00:03:53.773 SO libspdk_event_iscsi.so.6.0 00:03:53.773 
SYMLINK libspdk_event_vhost_scsi.so 00:03:53.773 SYMLINK libspdk_event_iscsi.so 00:03:53.773 SO libspdk.so.6.0 00:03:53.773 SYMLINK libspdk.so 00:03:54.032 CC app/trace_record/trace_record.o 00:03:54.032 CC app/spdk_lspci/spdk_lspci.o 00:03:54.032 CC app/spdk_nvme_identify/identify.o 00:03:54.032 CXX app/trace/trace.o 00:03:54.032 CC app/spdk_nvme_perf/perf.o 00:03:54.032 CC app/nvmf_tgt/nvmf_main.o 00:03:54.032 CC app/iscsi_tgt/iscsi_tgt.o 00:03:54.032 CC app/spdk_tgt/spdk_tgt.o 00:03:54.032 CC examples/util/zipf/zipf.o 00:03:54.032 CC test/thread/poller_perf/poller_perf.o 00:03:54.032 LINK spdk_lspci 00:03:54.291 LINK nvmf_tgt 00:03:54.291 LINK spdk_trace_record 00:03:54.291 LINK zipf 00:03:54.291 LINK iscsi_tgt 00:03:54.291 LINK poller_perf 00:03:54.291 LINK spdk_trace 00:03:54.291 LINK spdk_tgt 00:03:54.291 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.549 CC test/dma/test_dma/test_dma.o 00:03:54.549 TEST_HEADER include/spdk/accel.h 00:03:54.549 TEST_HEADER include/spdk/accel_module.h 00:03:54.549 TEST_HEADER include/spdk/assert.h 00:03:54.549 TEST_HEADER include/spdk/barrier.h 00:03:54.549 TEST_HEADER include/spdk/base64.h 00:03:54.549 TEST_HEADER include/spdk/bdev.h 00:03:54.549 TEST_HEADER include/spdk/bdev_module.h 00:03:54.549 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.549 TEST_HEADER include/spdk/bit_array.h 00:03:54.549 TEST_HEADER include/spdk/bit_pool.h 00:03:54.549 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.549 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.549 TEST_HEADER include/spdk/blobfs.h 00:03:54.549 TEST_HEADER include/spdk/blob.h 00:03:54.549 TEST_HEADER include/spdk/conf.h 00:03:54.549 TEST_HEADER include/spdk/config.h 00:03:54.549 TEST_HEADER include/spdk/cpuset.h 00:03:54.549 TEST_HEADER include/spdk/crc16.h 00:03:54.549 TEST_HEADER include/spdk/crc32.h 00:03:54.549 TEST_HEADER include/spdk/crc64.h 00:03:54.549 TEST_HEADER include/spdk/dif.h 00:03:54.549 TEST_HEADER include/spdk/dma.h 00:03:54.549 TEST_HEADER include/spdk/endian.h 00:03:54.549 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.549 TEST_HEADER include/spdk/env.h 00:03:54.549 TEST_HEADER include/spdk/event.h 00:03:54.549 TEST_HEADER include/spdk/fd_group.h 00:03:54.549 TEST_HEADER include/spdk/fd.h 00:03:54.549 TEST_HEADER include/spdk/file.h 00:03:54.549 TEST_HEADER include/spdk/fsdev.h 00:03:54.549 CC examples/ioat/perf/perf.o 00:03:54.549 TEST_HEADER include/spdk/fsdev_module.h 00:03:54.549 TEST_HEADER include/spdk/ftl.h 00:03:54.549 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:54.549 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.549 TEST_HEADER include/spdk/hexlify.h 00:03:54.549 TEST_HEADER include/spdk/histogram_data.h 00:03:54.549 TEST_HEADER include/spdk/idxd.h 00:03:54.549 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.549 TEST_HEADER include/spdk/init.h 00:03:54.549 TEST_HEADER include/spdk/ioat.h 00:03:54.549 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.549 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.549 TEST_HEADER include/spdk/json.h 00:03:54.549 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.549 CC test/app/bdev_svc/bdev_svc.o 00:03:54.549 TEST_HEADER include/spdk/keyring.h 00:03:54.549 TEST_HEADER include/spdk/keyring_module.h 00:03:54.549 TEST_HEADER include/spdk/likely.h 00:03:54.549 TEST_HEADER include/spdk/log.h 00:03:54.549 CC test/event/event_perf/event_perf.o 00:03:54.549 TEST_HEADER include/spdk/lvol.h 00:03:54.549 TEST_HEADER include/spdk/md5.h 00:03:54.549 TEST_HEADER include/spdk/memory.h 00:03:54.549 TEST_HEADER include/spdk/mmio.h 00:03:54.549 TEST_HEADER 
include/spdk/nbd.h 00:03:54.549 TEST_HEADER include/spdk/net.h 00:03:54.549 TEST_HEADER include/spdk/notify.h 00:03:54.549 TEST_HEADER include/spdk/nvme.h 00:03:54.549 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.549 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.549 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.549 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.549 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.549 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.549 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.549 TEST_HEADER include/spdk/nvmf.h 00:03:54.549 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.549 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.549 TEST_HEADER include/spdk/opal.h 00:03:54.549 TEST_HEADER include/spdk/opal_spec.h 00:03:54.549 TEST_HEADER include/spdk/pci_ids.h 00:03:54.549 TEST_HEADER include/spdk/pipe.h 00:03:54.549 TEST_HEADER include/spdk/queue.h 00:03:54.549 TEST_HEADER include/spdk/reduce.h 00:03:54.549 TEST_HEADER include/spdk/rpc.h 00:03:54.549 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.549 TEST_HEADER include/spdk/scheduler.h 00:03:54.549 TEST_HEADER include/spdk/scsi.h 00:03:54.549 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.549 LINK spdk_nvme_discover 00:03:54.549 TEST_HEADER include/spdk/sock.h 00:03:54.549 TEST_HEADER include/spdk/stdinc.h 00:03:54.550 TEST_HEADER include/spdk/string.h 00:03:54.550 TEST_HEADER include/spdk/thread.h 00:03:54.550 TEST_HEADER include/spdk/trace.h 00:03:54.550 TEST_HEADER include/spdk/trace_parser.h 00:03:54.550 TEST_HEADER include/spdk/tree.h 00:03:54.550 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:54.550 TEST_HEADER include/spdk/ublk.h 00:03:54.550 TEST_HEADER include/spdk/util.h 00:03:54.550 TEST_HEADER include/spdk/uuid.h 00:03:54.550 TEST_HEADER include/spdk/version.h 00:03:54.550 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.550 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.550 TEST_HEADER include/spdk/vhost.h 00:03:54.550 TEST_HEADER include/spdk/vmd.h 00:03:54.550 TEST_HEADER include/spdk/xor.h 00:03:54.550 TEST_HEADER include/spdk/zipf.h 00:03:54.550 CXX test/cpp_headers/accel.o 00:03:54.550 LINK bdev_svc 00:03:54.808 LINK event_perf 00:03:54.808 LINK ioat_perf 00:03:54.808 LINK mem_callbacks 00:03:54.808 CXX test/cpp_headers/accel_module.o 00:03:54.808 CC test/app/histogram_perf/histogram_perf.o 00:03:54.808 LINK spdk_nvme_identify 00:03:54.808 CC test/event/reactor/reactor.o 00:03:54.808 LINK spdk_nvme_perf 00:03:54.808 LINK test_dma 00:03:54.808 CC test/env/vtophys/vtophys.o 00:03:54.808 CC test/event/reactor_perf/reactor_perf.o 00:03:54.808 CC examples/ioat/verify/verify.o 00:03:55.066 CXX test/cpp_headers/assert.o 00:03:55.066 LINK histogram_perf 00:03:55.066 LINK nvme_fuzz 00:03:55.066 LINK reactor 00:03:55.066 CC test/app/jsoncat/jsoncat.o 00:03:55.066 LINK reactor_perf 00:03:55.066 LINK vtophys 00:03:55.066 CXX test/cpp_headers/barrier.o 00:03:55.066 CC app/spdk_top/spdk_top.o 00:03:55.066 CXX test/cpp_headers/base64.o 00:03:55.066 CXX test/cpp_headers/bdev.o 00:03:55.066 LINK jsoncat 00:03:55.066 LINK verify 00:03:55.325 CC examples/vmd/lsvmd/lsvmd.o 00:03:55.325 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:55.325 CXX test/cpp_headers/bdev_module.o 00:03:55.325 CC test/event/app_repeat/app_repeat.o 00:03:55.325 CXX test/cpp_headers/bdev_zone.o 00:03:55.325 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:55.325 CC examples/vmd/led/led.o 00:03:55.325 CC test/env/memory/memory_ut.o 00:03:55.325 CC test/env/pci/pci_ut.o 00:03:55.325 LINK lsvmd 00:03:55.325 LINK 
env_dpdk_post_init 00:03:55.325 LINK led 00:03:55.325 LINK app_repeat 00:03:55.325 CXX test/cpp_headers/bit_array.o 00:03:55.583 CC app/vhost/vhost.o 00:03:55.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.583 CXX test/cpp_headers/bit_pool.o 00:03:55.583 LINK pci_ut 00:03:55.583 CC test/event/scheduler/scheduler.o 00:03:55.583 CXX test/cpp_headers/blob_bdev.o 00:03:55.583 CC examples/idxd/perf/perf.o 00:03:55.583 LINK vhost 00:03:55.842 CC test/app/stub/stub.o 00:03:55.842 CXX test/cpp_headers/blobfs_bdev.o 00:03:55.842 CXX test/cpp_headers/blobfs.o 00:03:55.842 LINK scheduler 00:03:55.842 CXX test/cpp_headers/blob.o 00:03:55.842 LINK spdk_top 00:03:55.842 LINK stub 00:03:55.842 LINK vhost_fuzz 00:03:55.842 CXX test/cpp_headers/conf.o 00:03:55.842 CXX test/cpp_headers/config.o 00:03:56.100 LINK idxd_perf 00:03:56.100 CXX test/cpp_headers/cpuset.o 00:03:56.100 CC test/rpc_client/rpc_client_test.o 00:03:56.100 LINK memory_ut 00:03:56.100 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:56.100 CXX test/cpp_headers/crc16.o 00:03:56.100 CC app/spdk_dd/spdk_dd.o 00:03:56.100 CC examples/thread/thread/thread_ex.o 00:03:56.100 CC examples/sock/hello_world/hello_sock.o 00:03:56.100 CC app/fio/nvme/fio_plugin.o 00:03:56.100 CXX test/cpp_headers/crc32.o 00:03:56.100 LINK rpc_client_test 00:03:56.358 LINK interrupt_tgt 00:03:56.358 CC app/fio/bdev/fio_plugin.o 00:03:56.358 CXX test/cpp_headers/crc64.o 00:03:56.358 LINK thread 00:03:56.358 CC test/accel/dif/dif.o 00:03:56.358 LINK hello_sock 00:03:56.358 LINK spdk_dd 00:03:56.358 CXX test/cpp_headers/dif.o 00:03:56.358 CXX test/cpp_headers/dma.o 00:03:56.616 CC test/blobfs/mkfs/mkfs.o 00:03:56.616 LINK iscsi_fuzz 00:03:56.616 CC test/lvol/esnap/esnap.o 00:03:56.616 LINK spdk_nvme 00:03:56.616 CXX test/cpp_headers/endian.o 00:03:56.616 CXX test/cpp_headers/env_dpdk.o 00:03:56.616 LINK mkfs 00:03:56.616 CC examples/accel/perf/accel_perf.o 00:03:56.616 CXX test/cpp_headers/env.o 00:03:56.616 LINK spdk_bdev 00:03:56.874 CC examples/blob/hello_world/hello_blob.o 00:03:56.874 CXX test/cpp_headers/event.o 00:03:56.874 CC examples/blob/cli/blobcli.o 00:03:56.874 CC test/nvme/aer/aer.o 00:03:56.874 CXX test/cpp_headers/fd_group.o 00:03:56.874 CXX test/cpp_headers/fd.o 00:03:56.874 LINK dif 00:03:56.874 CXX test/cpp_headers/file.o 00:03:56.874 CXX test/cpp_headers/fsdev.o 00:03:57.133 CC examples/nvme/hello_world/hello_world.o 00:03:57.133 LINK hello_blob 00:03:57.133 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:57.133 LINK aer 00:03:57.133 CC test/nvme/reset/reset.o 00:03:57.133 CXX test/cpp_headers/fsdev_module.o 00:03:57.133 CC test/nvme/sgl/sgl.o 00:03:57.133 CXX test/cpp_headers/ftl.o 00:03:57.133 LINK accel_perf 00:03:57.133 LINK hello_world 00:03:57.133 LINK hello_fsdev 00:03:57.391 LINK blobcli 00:03:57.391 CXX test/cpp_headers/fuse_dispatcher.o 00:03:57.391 LINK reset 00:03:57.391 CC examples/nvme/reconnect/reconnect.o 00:03:57.391 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:57.391 LINK sgl 00:03:57.391 CC examples/nvme/arbitration/arbitration.o 00:03:57.391 CXX test/cpp_headers/gpt_spec.o 00:03:57.391 CC examples/nvme/hotplug/hotplug.o 00:03:57.391 CC examples/bdev/hello_world/hello_bdev.o 00:03:57.391 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:57.391 CXX test/cpp_headers/hexlify.o 00:03:57.650 CC examples/bdev/bdevperf/bdevperf.o 00:03:57.650 CC test/nvme/e2edp/nvme_dp.o 00:03:57.650 CXX test/cpp_headers/histogram_data.o 00:03:57.650 LINK cmb_copy 00:03:57.650 LINK hello_bdev 
00:03:57.650 LINK hotplug 00:03:57.650 LINK reconnect 00:03:57.650 LINK arbitration 00:03:57.650 CXX test/cpp_headers/idxd.o 00:03:57.650 CXX test/cpp_headers/idxd_spec.o 00:03:57.908 CXX test/cpp_headers/init.o 00:03:57.908 CXX test/cpp_headers/ioat.o 00:03:57.908 LINK nvme_dp 00:03:57.908 LINK nvme_manage 00:03:57.908 CXX test/cpp_headers/ioat_spec.o 00:03:57.908 CC examples/nvme/abort/abort.o 00:03:57.908 CXX test/cpp_headers/iscsi_spec.o 00:03:57.908 CXX test/cpp_headers/json.o 00:03:57.908 CXX test/cpp_headers/jsonrpc.o 00:03:57.908 CC test/bdev/bdevio/bdevio.o 00:03:57.908 CXX test/cpp_headers/keyring.o 00:03:57.908 CC test/nvme/overhead/overhead.o 00:03:57.908 CXX test/cpp_headers/keyring_module.o 00:03:58.169 CC test/nvme/err_injection/err_injection.o 00:03:58.169 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:58.169 CXX test/cpp_headers/likely.o 00:03:58.169 CXX test/cpp_headers/log.o 00:03:58.169 LINK err_injection 00:03:58.169 LINK bdevperf 00:03:58.169 CC test/nvme/startup/startup.o 00:03:58.169 LINK pmr_persistence 00:03:58.169 LINK overhead 00:03:58.169 LINK abort 00:03:58.169 CXX test/cpp_headers/lvol.o 00:03:58.169 CC test/nvme/reserve/reserve.o 00:03:58.429 LINK bdevio 00:03:58.429 CXX test/cpp_headers/md5.o 00:03:58.429 LINK startup 00:03:58.429 CXX test/cpp_headers/memory.o 00:03:58.429 CC test/nvme/simple_copy/simple_copy.o 00:03:58.429 CC test/nvme/connect_stress/connect_stress.o 00:03:58.429 CC test/nvme/boot_partition/boot_partition.o 00:03:58.429 LINK reserve 00:03:58.429 CXX test/cpp_headers/mmio.o 00:03:58.429 CXX test/cpp_headers/nbd.o 00:03:58.429 CC test/nvme/compliance/nvme_compliance.o 00:03:58.429 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.429 CXX test/cpp_headers/net.o 00:03:58.688 CC examples/nvmf/nvmf/nvmf.o 00:03:58.688 LINK connect_stress 00:03:58.688 LINK boot_partition 00:03:58.688 LINK simple_copy 00:03:58.688 CXX test/cpp_headers/notify.o 00:03:58.688 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.688 CXX test/cpp_headers/nvme.o 00:03:58.688 CXX test/cpp_headers/nvme_intel.o 00:03:58.688 LINK fused_ordering 00:03:58.688 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.688 CC test/nvme/fdp/fdp.o 00:03:58.688 CC test/nvme/cuse/cuse.o 00:03:58.688 LINK nvme_compliance 00:03:58.688 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.688 LINK doorbell_aers 00:03:58.947 CXX test/cpp_headers/nvme_spec.o 00:03:58.947 CXX test/cpp_headers/nvme_zns.o 00:03:58.947 LINK nvmf 00:03:58.947 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.947 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.947 CXX test/cpp_headers/nvmf.o 00:03:58.947 CXX test/cpp_headers/nvmf_spec.o 00:03:58.947 CXX test/cpp_headers/nvmf_transport.o 00:03:58.947 CXX test/cpp_headers/opal.o 00:03:58.947 CXX test/cpp_headers/opal_spec.o 00:03:58.947 LINK fdp 00:03:58.947 CXX test/cpp_headers/pci_ids.o 00:03:58.947 CXX test/cpp_headers/pipe.o 00:03:58.947 CXX test/cpp_headers/queue.o 00:03:58.947 CXX test/cpp_headers/reduce.o 00:03:58.947 CXX test/cpp_headers/rpc.o 00:03:59.205 CXX test/cpp_headers/scheduler.o 00:03:59.205 CXX test/cpp_headers/scsi.o 00:03:59.205 CXX test/cpp_headers/scsi_spec.o 00:03:59.205 CXX test/cpp_headers/sock.o 00:03:59.205 CXX test/cpp_headers/stdinc.o 00:03:59.205 CXX test/cpp_headers/string.o 00:03:59.205 CXX test/cpp_headers/thread.o 00:03:59.205 CXX test/cpp_headers/trace.o 00:03:59.205 CXX test/cpp_headers/trace_parser.o 00:03:59.205 CXX test/cpp_headers/tree.o 00:03:59.205 CXX test/cpp_headers/ublk.o 00:03:59.205 CXX test/cpp_headers/util.o 00:03:59.205 CXX 
test/cpp_headers/uuid.o 00:03:59.205 CXX test/cpp_headers/version.o 00:03:59.205 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.205 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.205 CXX test/cpp_headers/vhost.o 00:03:59.205 CXX test/cpp_headers/vmd.o 00:03:59.206 CXX test/cpp_headers/xor.o 00:03:59.463 CXX test/cpp_headers/zipf.o 00:03:59.721 LINK cuse 00:04:01.719 LINK esnap 00:04:01.719 ************************************ 00:04:01.719 END TEST make 00:04:01.719 ************************************ 00:04:01.719 00:04:01.719 real 1m0.864s 00:04:01.719 user 5m3.437s 00:04:01.719 sys 0m49.354s 00:04:01.719 02:51:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:01.719 02:51:17 make -- common/autotest_common.sh@10 -- $ set +x 00:04:01.719 02:51:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:01.719 02:51:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:01.719 02:51:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:01.719 02:51:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.719 02:51:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:01.719 02:51:17 -- pm/common@44 -- $ pid=5811 00:04:01.719 02:51:17 -- pm/common@50 -- $ kill -TERM 5811 00:04:01.719 02:51:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.719 02:51:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:01.719 02:51:17 -- pm/common@44 -- $ pid=5813 00:04:01.719 02:51:17 -- pm/common@50 -- $ kill -TERM 5813 00:04:01.719 02:51:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:01.719 02:51:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:01.719 02:51:17 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.719 02:51:17 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.719 02:51:17 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.719 02:51:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.719 02:51:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.719 02:51:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.719 02:51:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.719 02:51:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.719 02:51:17 -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.719 02:51:17 -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.719 02:51:17 -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.719 02:51:17 -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.719 02:51:17 -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.719 02:51:17 -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.719 02:51:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.719 02:51:17 -- scripts/common.sh@344 -- # case "$op" in 00:04:01.719 02:51:17 -- scripts/common.sh@345 -- # : 1 00:04:01.719 02:51:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.719 02:51:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.719 02:51:17 -- scripts/common.sh@365 -- # decimal 1 00:04:01.719 02:51:17 -- scripts/common.sh@353 -- # local d=1 00:04:01.719 02:51:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.719 02:51:17 -- scripts/common.sh@355 -- # echo 1 00:04:01.719 02:51:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.719 02:51:17 -- scripts/common.sh@366 -- # decimal 2 00:04:01.719 02:51:17 -- scripts/common.sh@353 -- # local d=2 00:04:01.719 02:51:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.719 02:51:17 -- scripts/common.sh@355 -- # echo 2 00:04:01.719 02:51:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.719 02:51:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.719 02:51:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.719 02:51:17 -- scripts/common.sh@368 -- # return 0 00:04:01.719 02:51:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.719 02:51:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.719 --rc genhtml_branch_coverage=1 00:04:01.719 --rc genhtml_function_coverage=1 00:04:01.719 --rc genhtml_legend=1 00:04:01.719 --rc geninfo_all_blocks=1 00:04:01.719 --rc geninfo_unexecuted_blocks=1 00:04:01.719 00:04:01.719 ' 00:04:01.719 02:51:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.720 --rc genhtml_branch_coverage=1 00:04:01.720 --rc genhtml_function_coverage=1 00:04:01.720 --rc genhtml_legend=1 00:04:01.720 --rc geninfo_all_blocks=1 00:04:01.720 --rc geninfo_unexecuted_blocks=1 00:04:01.720 00:04:01.720 ' 00:04:01.720 02:51:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.720 --rc genhtml_branch_coverage=1 00:04:01.720 --rc genhtml_function_coverage=1 00:04:01.720 --rc genhtml_legend=1 00:04:01.720 --rc geninfo_all_blocks=1 00:04:01.720 --rc geninfo_unexecuted_blocks=1 00:04:01.720 00:04:01.720 ' 00:04:01.720 02:51:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.720 --rc genhtml_branch_coverage=1 00:04:01.720 --rc genhtml_function_coverage=1 00:04:01.720 --rc genhtml_legend=1 00:04:01.720 --rc geninfo_all_blocks=1 00:04:01.720 --rc geninfo_unexecuted_blocks=1 00:04:01.720 00:04:01.720 ' 00:04:01.720 02:51:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:01.720 02:51:17 -- nvmf/common.sh@7 -- # uname -s 00:04:01.720 02:51:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.720 02:51:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.720 02:51:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.720 02:51:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.720 02:51:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:01.720 02:51:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.720 02:51:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.720 02:51:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.720 02:51:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.720 02:51:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.720 02:51:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:04:01.720 
02:51:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:04:01.720 02:51:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.720 02:51:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.720 02:51:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.720 02:51:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.720 02:51:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:01.720 02:51:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:01.720 02:51:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.720 02:51:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.720 02:51:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.720 02:51:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.720 02:51:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.720 02:51:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.720 02:51:17 -- paths/export.sh@5 -- # export PATH 00:04:01.720 02:51:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.720 02:51:17 -- nvmf/common.sh@51 -- # : 0 00:04:01.720 02:51:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:01.720 02:51:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:01.720 02:51:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.720 02:51:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.720 02:51:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.720 02:51:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:01.720 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:01.720 02:51:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:01.720 02:51:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:01.720 02:51:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:01.720 02:51:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:01.720 02:51:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:01.720 02:51:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:01.720 02:51:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:01.720 02:51:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:01.720 02:51:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:01.720 02:51:17 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:01.720 02:51:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:01.720 02:51:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:01.720 02:51:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:01.720 02:51:17 -- spdk/autotest.sh@48 -- # udevadm_pid=66212 00:04:01.720 02:51:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:01.720 02:51:17 -- pm/common@17 -- # local monitor 00:04:01.720 02:51:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.720 02:51:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:01.720 02:51:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.720 02:51:17 -- pm/common@25 -- # sleep 1 00:04:01.720 02:51:17 -- pm/common@21 -- # date +%s 00:04:01.720 02:51:17 -- pm/common@21 -- # date +%s 00:04:01.720 02:51:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732848677 00:04:01.720 02:51:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732848677 00:04:01.720 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732848677_collect-cpu-load.pm.log 00:04:01.720 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732848677_collect-vmstat.pm.log 00:04:03.095 02:51:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.095 02:51:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:03.095 02:51:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.095 02:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.095 02:51:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:03.095 02:51:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:03.095 02:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.095 02:51:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:03.095 02:51:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:03.095 02:51:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:03.095 02:51:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:03.095 02:51:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:03.095 02:51:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:03.095 02:51:18 -- common/autotest_common.sh@1457 -- # uname 00:04:03.095 02:51:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:03.095 02:51:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:03.095 02:51:18 -- common/autotest_common.sh@1477 -- # uname 00:04:03.095 02:51:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:03.095 02:51:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:03.095 02:51:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:03.095 lcov: LCOV version 1.15 00:04:03.095 02:51:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:17.971 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:17.971 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:30.177 02:51:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:30.178 02:51:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.178 02:51:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.178 02:51:44 -- spdk/autotest.sh@78 -- # rm -f 00:04:30.178 02:51:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:30.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:30.178 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:30.178 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:30.178 02:51:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:30.178 02:51:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:30.178 02:51:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:30.178 02:51:45 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:30.178 02:51:45 
-- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1
00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:30.178 02:51:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:30.178 02:51:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1
00:04:30.178 02:51:45 -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:04:30.178 02:51:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:04:30.178 02:51:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:30.178 No valid GPT data, bailing
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.178 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.178 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:30.178 1+0 records in
00:04:30.178 1+0 records out
00:04:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108283 s, 96.8 MB/s
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:30.178 No valid GPT data, bailing
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.178 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.178 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:30.178 1+0 records in
00:04:30.178 1+0 records out
00:04:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00545871 s, 192 MB/s
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:04:30.178 No valid GPT data, bailing
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.178 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.178 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:04:30.178 1+0 records in
00:04:30.178 1+0 records out
00:04:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411011 s, 255 MB/s
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:04:30.178 No valid GPT data, bailing
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.178 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.178 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:04:30.178 1+0 records in
00:04:30.178 1+0 records out
00:04:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491519 s, 213 MB/s
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:04:30.178 No valid GPT data, bailing
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:04:30.178 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.178 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.178 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:04:30.178 1+0 records in
00:04:30.178 1+0 records out
00:04:30.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453266 s, 231 MB/s
00:04:30.178 02:51:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.178 02:51:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.178 02:51:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:04:30.178 02:51:45 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:04:30.178 02:51:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:04:30.179 No valid GPT data, bailing
00:04:30.179 02:51:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:04:30.179 02:51:45 -- scripts/common.sh@394 -- # pt=
00:04:30.179 02:51:45 -- scripts/common.sh@395 -- # return 1
00:04:30.179 02:51:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:04:30.179 1+0 records in
00:04:30.179 1+0 records out
00:04:30.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558751 s, 188 MB/s
00:04:30.179 02:51:45 -- spdk/autotest.sh@105 -- # sync
00:04:30.750 02:51:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:30.750 02:51:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:30.750 02:51:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:32.130 02:51:48 -- spdk/autotest.sh@111 -- # uname -s
00:04:32.130 02:51:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:32.130 02:51:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:32.130 02:51:48 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:32.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.956 Hugepages
00:04:32.956 node hugesize free / total
00:04:32.956 node0 1048576kB 0 / 0
00:04:32.956 node0 2048kB 0 / 0
00:04:32.956
00:04:32.956 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:33.215 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:33.215 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:33.215 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:33.215 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:33.475 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:33.475 02:51:49 -- spdk/autotest.sh@117 -- # uname -s
00:04:33.475 02:51:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:33.475 02:51:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:33.475 02:51:49 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:34.301 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:34.301 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:34.301 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:34.301 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:34.560 02:51:50 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:35.495 02:51:51 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:35.495 02:51:51 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:35.495 02:51:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:35.495 02:51:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:35.495 02:51:51 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:35.495 02:51:51 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:35.495 02:51:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:35.495 02:51:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:35.495 02:51:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:35.495 02:51:51 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:04:35.495 02:51:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:35.495 02:51:51 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:35.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:36.010 Waiting for block devices as requested
00:04:36.010 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:36.010 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:36.010 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:36.010 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:41.284 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:41.284 02:51:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:41.284 02:51:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:41.284 02:51:57 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:41.284 02:51:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:41.284 02:51:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:41.284 02:51:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:41.284 02:51:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:41.284 02:51:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:41.284 02:51:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:41.284 02:51:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:41.284 02:51:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:41.284 02:51:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1543 -- # continue 00:04:41.284 02:51:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:41.284 02:51:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:41.284 02:51:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:41.284 02:51:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:41.285 02:51:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1543 -- # continue 00:04:41.285 02:51:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:41.285 02:51:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:41.285 02:51:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1543 -- # continue 00:04:41.285 02:51:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:41.285 02:51:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:41.285 02:51:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:41.285 02:51:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:41.285 02:51:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:41.285 02:51:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:41.285 02:51:57 -- common/autotest_common.sh@1543 -- # continue 00:04:41.285 02:51:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:41.285 02:51:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.285 02:51:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.285 02:51:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:41.285 02:51:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.285 02:51:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.285 02:51:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.113 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.113 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.113 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.375 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.375 02:51:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:42.375 02:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.375 02:51:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.375 02:51:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:42.375 02:51:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:42.375 02:51:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:42.375 02:51:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:42.375 02:51:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:42.375 02:51:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:42.375 02:51:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:42.375 02:51:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:42.375 02:51:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:42.375 02:51:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:42.375 02:51:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.375 02:51:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:42.375 02:51:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:42.375 02:51:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:42.375 02:51:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:42.375 02:51:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:42.375 02:51:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:42.375 02:51:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:42.375 02:51:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:42.375 02:51:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:42.375 02:51:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:42.375 02:51:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:42.375 02:51:58 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:42.375 02:51:58 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:42.375 02:51:58 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:42.375 02:51:58 -- common/autotest_common.sh@1572 -- # return 0 00:04:42.375 02:51:58 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:42.375 02:51:58 -- common/autotest_common.sh@1580 -- # return 0 00:04:42.375 02:51:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:42.375 02:51:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:42.375 02:51:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:42.375 02:51:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:42.375 02:51:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:42.375 02:51:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.375 02:51:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.375 02:51:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:42.375 02:51:58 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:42.375 02:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.375 02:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.375 02:51:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.375 ************************************ 00:04:42.375 START TEST env 00:04:42.375 ************************************ 00:04:42.375 02:51:58 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:42.637 * Looking for test storage... 00:04:42.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.637 02:51:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.637 02:51:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.637 02:51:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.637 02:51:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.637 02:51:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.637 02:51:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.637 02:51:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.637 02:51:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.637 02:51:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.637 02:51:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.637 02:51:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.637 02:51:58 env -- scripts/common.sh@344 -- # case "$op" in 00:04:42.637 02:51:58 env -- scripts/common.sh@345 -- # : 1 00:04:42.637 02:51:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.637 02:51:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.637 02:51:58 env -- scripts/common.sh@365 -- # decimal 1 00:04:42.637 02:51:58 env -- scripts/common.sh@353 -- # local d=1 00:04:42.637 02:51:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.637 02:51:58 env -- scripts/common.sh@355 -- # echo 1 00:04:42.637 02:51:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.637 02:51:58 env -- scripts/common.sh@366 -- # decimal 2 00:04:42.637 02:51:58 env -- scripts/common.sh@353 -- # local d=2 00:04:42.637 02:51:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.637 02:51:58 env -- scripts/common.sh@355 -- # echo 2 00:04:42.637 02:51:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.637 02:51:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.637 02:51:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.637 02:51:58 env -- scripts/common.sh@368 -- # return 0 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.637 --rc genhtml_branch_coverage=1 00:04:42.637 --rc genhtml_function_coverage=1 00:04:42.637 --rc genhtml_legend=1 00:04:42.637 --rc geninfo_all_blocks=1 00:04:42.637 --rc geninfo_unexecuted_blocks=1 00:04:42.637 00:04:42.637 ' 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.637 --rc genhtml_branch_coverage=1 00:04:42.637 --rc genhtml_function_coverage=1 00:04:42.637 --rc genhtml_legend=1 00:04:42.637 --rc geninfo_all_blocks=1 00:04:42.637 --rc geninfo_unexecuted_blocks=1 00:04:42.637 00:04:42.637 ' 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.637 --rc genhtml_branch_coverage=1 00:04:42.637 --rc genhtml_function_coverage=1 00:04:42.637 --rc genhtml_legend=1 00:04:42.637 --rc geninfo_all_blocks=1 00:04:42.637 --rc geninfo_unexecuted_blocks=1 00:04:42.637 00:04:42.637 ' 00:04:42.637 02:51:58 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.637 --rc genhtml_branch_coverage=1 00:04:42.637 --rc genhtml_function_coverage=1 00:04:42.637 --rc genhtml_legend=1 00:04:42.637 --rc geninfo_all_blocks=1 00:04:42.637 --rc geninfo_unexecuted_blocks=1 00:04:42.637 00:04:42.637 ' 00:04:42.638 02:51:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.638 02:51:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.638 02:51:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.638 02:51:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.638 ************************************ 00:04:42.638 START TEST env_memory 00:04:42.638 ************************************ 00:04:42.638 02:51:58 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:42.638 00:04:42.638 00:04:42.638 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.638 http://cunit.sourceforge.net/ 00:04:42.638 00:04:42.638 00:04:42.638 Suite: memory 00:04:42.638 Test: alloc and free memory map ...[2024-11-29 02:51:58.565043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.638 passed 00:04:42.638 Test: mem map translation ...[2024-11-29 02:51:58.603614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.638 [2024-11-29 02:51:58.603647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.638 [2024-11-29 02:51:58.603704] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.638 [2024-11-29 02:51:58.603717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.899 passed 00:04:42.899 Test: mem map registration ...[2024-11-29 02:51:58.671906] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:42.899 [2024-11-29 02:51:58.671954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:42.899 passed 00:04:42.899 Test: mem map adjacent registrations ...passed 00:04:42.899 00:04:42.899 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.899 suites 1 1 n/a 0 0 00:04:42.899 tests 4 4 4 0 0 00:04:42.899 asserts 152 152 152 0 n/a 00:04:42.899 00:04:42.899 Elapsed time = 0.233 seconds 00:04:42.899 00:04:42.899 real 0m0.262s 00:04:42.899 user 0m0.234s 00:04:42.899 sys 0m0.020s 00:04:42.899 02:51:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.899 ************************************ 00:04:42.899 END TEST env_memory 00:04:42.899 ************************************ 00:04:42.899 02:51:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.899 02:51:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.899 02:51:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.899 02:51:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.899 02:51:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.899 ************************************ 00:04:42.899 START TEST env_vtophys 00:04:42.899 ************************************ 00:04:42.899 02:51:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.899 EAL: lib.eal log level changed from notice to debug 00:04:42.899 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.899 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.899 EAL: Maximum logical cores by configuration: 128 00:04:42.899 EAL: Detected CPU lcores: 10 00:04:42.899 EAL: Detected NUMA nodes: 1 00:04:42.899 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:42.899 EAL: Detected shared linkage of DPDK 00:04:42.899 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:42.899 EAL: Registered [vdev] bus. 00:04:42.899 EAL: bus.vdev log level changed from disabled to notice 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:42.899 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:42.899 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:42.899 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:42.899 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.899 EAL: No shared files mode enabled, IPC is disabled 00:04:42.899 EAL: Selected IOVA mode 'PA' 00:04:42.899 EAL: Probing VFIO support... 00:04:42.899 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.899 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.899 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.899 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.899 EAL: Setting up physically contiguous memory... 00:04:42.899 EAL: Setting maximum number of open files to 524288 00:04:42.899 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.899 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.899 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.899 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.899 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.899 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.899 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.899 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.899 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.899 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.899 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.899 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.899 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.899 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.899 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.899 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.899 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.899 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.900 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.900 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.900 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.900 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.900 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:04:42.900 EAL: Hugepages will be freed exactly as allocated. 00:04:42.900 EAL: No shared files mode enabled, IPC is disabled 00:04:42.900 EAL: No shared files mode enabled, IPC is disabled 00:04:43.161 EAL: TSC frequency is ~2600000 KHz 00:04:43.161 EAL: Main lcore 0 is ready (tid=7fd9ed6c4a40;cpuset=[0]) 00:04:43.161 EAL: Trying to obtain current memory policy. 00:04:43.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.161 EAL: Restoring previous memory policy: 0 00:04:43.161 EAL: request: mp_malloc_sync 00:04:43.161 EAL: No shared files mode enabled, IPC is disabled 00:04:43.161 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.161 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:43.161 EAL: No shared files mode enabled, IPC is disabled 00:04:43.161 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.161 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.161 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:43.161 00:04:43.161 00:04:43.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.161 http://cunit.sourceforge.net/ 00:04:43.161 00:04:43.161 00:04:43.161 Suite: components_suite 00:04:43.421 Test: vtophys_malloc_test ...passed 00:04:43.421 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.421 EAL: Restoring previous memory policy: 4 00:04:43.421 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.421 EAL: request: mp_malloc_sync 00:04:43.421 EAL: No shared files mode enabled, IPC is disabled 00:04:43.421 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.421 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.421 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.422 EAL: Trying to obtain current memory policy. 
00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.422 EAL: Trying to obtain current memory policy. 00:04:43.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.422 EAL: Restoring previous memory policy: 4 00:04:43.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.422 EAL: request: mp_malloc_sync 00:04:43.422 EAL: No shared files mode enabled, IPC is disabled 00:04:43.422 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.683 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.683 EAL: request: mp_malloc_sync 00:04:43.683 EAL: No shared files mode enabled, IPC is disabled 00:04:43.683 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.683 EAL: Trying to obtain current memory policy. 
00:04:43.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.683 EAL: Restoring previous memory policy: 4 00:04:43.683 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.683 EAL: request: mp_malloc_sync 00:04:43.683 EAL: No shared files mode enabled, IPC is disabled 00:04:43.683 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.683 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.683 EAL: request: mp_malloc_sync 00:04:43.683 EAL: No shared files mode enabled, IPC is disabled 00:04:43.683 EAL: Heap on socket 0 was shrunk by 514MB 00:04:43.683 EAL: Trying to obtain current memory policy. 00:04:43.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.944 EAL: Restoring previous memory policy: 4 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.944 EAL: request: mp_malloc_sync 00:04:43.944 EAL: No shared files mode enabled, IPC is disabled 00:04:43.944 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.204 passed 00:04:44.205 00:04:44.205 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.205 suites 1 1 n/a 0 0 00:04:44.205 tests 2 2 2 0 0 00:04:44.205 asserts 6275 6275 6275 0 n/a 00:04:44.205 00:04:44.205 Elapsed time = 0.980 seconds 00:04:44.205 EAL: request: mp_malloc_sync 00:04:44.205 EAL: No shared files mode enabled, IPC is disabled 00:04:44.205 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.205 EAL: request: mp_malloc_sync 00:04:44.205 EAL: No shared files mode enabled, IPC is disabled 00:04:44.205 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.205 EAL: No shared files mode enabled, IPC is disabled 00:04:44.205 EAL: No shared files mode enabled, IPC is disabled 00:04:44.205 EAL: No shared files mode enabled, IPC is disabled 00:04:44.205 00:04:44.205 real 0m1.201s 00:04:44.205 user 0m0.487s 00:04:44.205 sys 0m0.576s 00:04:44.205 02:52:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.205 02:52:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.205 ************************************ 00:04:44.205 END TEST env_vtophys 00:04:44.205 ************************************ 00:04:44.205 02:52:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.205 02:52:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.205 02:52:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.205 02:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.205 ************************************ 00:04:44.205 START TEST env_pci 00:04:44.205 ************************************ 00:04:44.205 02:52:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.205 00:04:44.205 00:04:44.205 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.205 http://cunit.sourceforge.net/ 00:04:44.205 00:04:44.205 00:04:44.205 Suite: pci 00:04:44.205 Test: pci_hook ...[2024-11-29 02:52:00.118638] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68904 has claimed it 00:04:44.205 passed 00:04:44.205 00:04:44.205 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.205 suites 1 1 n/a 0 0 00:04:44.205 tests 1 1 1 0 0 00:04:44.205 asserts 25 25 25 0 n/a 00:04:44.205 00:04:44.205 Elapsed time = 0.006 seconds 00:04:44.205 EAL: Cannot find 
device (10000:00:01.0) 00:04:44.205 EAL: Failed to attach device on primary process 00:04:44.205 00:04:44.205 real 0m0.055s 00:04:44.205 user 0m0.028s 00:04:44.205 sys 0m0.026s 00:04:44.205 02:52:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.205 ************************************ 00:04:44.205 END TEST env_pci 00:04:44.205 ************************************ 00:04:44.205 02:52:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.467 02:52:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.467 02:52:00 env -- env/env.sh@15 -- # uname 00:04:44.467 02:52:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.467 02:52:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.467 02:52:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.467 02:52:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:44.467 02:52:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.467 02:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.467 ************************************ 00:04:44.467 START TEST env_dpdk_post_init 00:04:44.467 ************************************ 00:04:44.467 02:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.467 EAL: Detected CPU lcores: 10 00:04:44.467 EAL: Detected NUMA nodes: 1 00:04:44.467 EAL: Detected shared linkage of DPDK 00:04:44.467 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.467 EAL: Selected IOVA mode 'PA' 00:04:44.467 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:44.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:44.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:44.467 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:44.467 Starting DPDK initialization... 00:04:44.467 Starting SPDK post initialization... 00:04:44.467 SPDK NVMe probe 00:04:44.467 Attaching to 0000:00:10.0 00:04:44.467 Attaching to 0000:00:11.0 00:04:44.467 Attaching to 0000:00:12.0 00:04:44.467 Attaching to 0000:00:13.0 00:04:44.467 Attached to 0000:00:13.0 00:04:44.467 Attached to 0000:00:10.0 00:04:44.467 Attached to 0000:00:11.0 00:04:44.467 Attached to 0000:00:12.0 00:04:44.467 Cleaning up... 
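
For reference, the env_dpdk_post_init run above drives DPDK initialization with the same EAL arguments visible earlier in this log (-c 0x1 to pin the app to core 0, --base-virtaddr=0x200000000000 to fix the EAL mapping base) and then probes the four emulated NVMe controllers. A minimal bash sketch of invoking the test binary by hand, assuming the build-tree paths used throughout this log and root privileges for hugepage access:

    # Sketch only: run the DPDK post-initialization test directly.
    # SPDK_DIR mirrors the checkout path in this log; adjust as needed.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # -c 0x1 restricts the app to core 0; --base-virtaddr fixes where
    # EAL places its memory maps so runs are reproducible.
    sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000
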
00:04:44.467 00:04:44.467 real 0m0.216s 00:04:44.467 user 0m0.068s 00:04:44.467 sys 0m0.050s 00:04:44.467 ************************************ 00:04:44.467 02:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.467 02:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.467 END TEST env_dpdk_post_init 00:04:44.467 ************************************ 00:04:44.727 02:52:00 env -- env/env.sh@26 -- # uname 00:04:44.727 02:52:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.727 02:52:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.727 02:52:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.727 02:52:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.727 02:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.727 ************************************ 00:04:44.727 START TEST env_mem_callbacks 00:04:44.727 ************************************ 00:04:44.727 02:52:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.727 EAL: Detected CPU lcores: 10 00:04:44.727 EAL: Detected NUMA nodes: 1 00:04:44.727 EAL: Detected shared linkage of DPDK 00:04:44.727 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.727 EAL: Selected IOVA mode 'PA' 00:04:44.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.727 00:04:44.727 00:04:44.727 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.727 http://cunit.sourceforge.net/ 00:04:44.727 00:04:44.727 00:04:44.727 Suite: memory 00:04:44.727 Test: test ... 00:04:44.727 register 0x200000200000 2097152 00:04:44.727 malloc 3145728 00:04:44.727 register 0x200000400000 4194304 00:04:44.727 buf 0x200000500000 len 3145728 PASSED 00:04:44.727 malloc 64 00:04:44.727 buf 0x2000004fff40 len 64 PASSED 00:04:44.727 malloc 4194304 00:04:44.727 register 0x200000800000 6291456 00:04:44.727 buf 0x200000a00000 len 4194304 PASSED 00:04:44.727 free 0x200000500000 3145728 00:04:44.727 free 0x2000004fff40 64 00:04:44.727 unregister 0x200000400000 4194304 PASSED 00:04:44.727 free 0x200000a00000 4194304 00:04:44.727 unregister 0x200000800000 6291456 PASSED 00:04:44.727 malloc 8388608 00:04:44.727 register 0x200000400000 10485760 00:04:44.727 buf 0x200000600000 len 8388608 PASSED 00:04:44.727 free 0x200000600000 8388608 00:04:44.727 unregister 0x200000400000 10485760 PASSED 00:04:44.727 passed 00:04:44.727 00:04:44.727 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.727 suites 1 1 n/a 0 0 00:04:44.727 tests 1 1 1 0 0 00:04:44.727 asserts 15 15 15 0 n/a 00:04:44.727 00:04:44.727 Elapsed time = 0.010 seconds 00:04:44.727 00:04:44.727 real 0m0.152s 00:04:44.727 user 0m0.020s 00:04:44.727 sys 0m0.029s 00:04:44.727 02:52:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.727 ************************************ 00:04:44.727 END TEST env_mem_callbacks 00:04:44.727 ************************************ 00:04:44.727 02:52:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.727 ************************************ 00:04:44.727 END TEST env 00:04:44.727 ************************************ 00:04:44.727 00:04:44.727 real 0m2.349s 00:04:44.727 user 0m0.975s 00:04:44.727 sys 0m0.914s 00:04:44.727 02:52:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.727 02:52:00 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.989 02:52:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.989 02:52:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.989 02:52:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.989 02:52:00 -- common/autotest_common.sh@10 -- # set +x 00:04:44.989 ************************************ 00:04:44.989 START TEST rpc 00:04:44.989 ************************************ 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.989 * Looking for test storage... 00:04:44.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.989 02:52:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.989 02:52:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.989 02:52:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.989 02:52:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.989 02:52:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.989 02:52:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.989 02:52:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.989 02:52:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.989 02:52:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.989 02:52:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.989 02:52:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.989 02:52:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.989 02:52:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.989 02:52:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.989 02:52:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.989 --rc genhtml_branch_coverage=1 00:04:44.989 --rc genhtml_function_coverage=1 00:04:44.989 --rc genhtml_legend=1 00:04:44.989 --rc geninfo_all_blocks=1 00:04:44.989 --rc geninfo_unexecuted_blocks=1 00:04:44.989 00:04:44.989 ' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.989 --rc genhtml_branch_coverage=1 00:04:44.989 --rc genhtml_function_coverage=1 00:04:44.989 --rc genhtml_legend=1 00:04:44.989 --rc geninfo_all_blocks=1 00:04:44.989 --rc geninfo_unexecuted_blocks=1 00:04:44.989 00:04:44.989 ' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.989 --rc genhtml_branch_coverage=1 00:04:44.989 --rc genhtml_function_coverage=1 00:04:44.989 --rc genhtml_legend=1 00:04:44.989 --rc geninfo_all_blocks=1 00:04:44.989 --rc geninfo_unexecuted_blocks=1 00:04:44.989 00:04:44.989 ' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.989 --rc genhtml_branch_coverage=1 00:04:44.989 --rc genhtml_function_coverage=1 00:04:44.989 --rc genhtml_legend=1 00:04:44.989 --rc geninfo_all_blocks=1 00:04:44.989 --rc geninfo_unexecuted_blocks=1 00:04:44.989 00:04:44.989 ' 00:04:44.989 02:52:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69031 00:04:44.989 02:52:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.989 02:52:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69031 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 69031 ']' 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
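
For reference, the harness is starting spdk_tgt (launched above from build/bin/spdk_tgt with -e bdev to enable the bdev tracepoint group) and then blocking until the target answers on its UNIX RPC socket. A rough standalone equivalent of that wait loop, assuming scripts/rpc.py from the same checkout and the default /var/tmp/spdk.sock socket; this is a sketch, not the harness's actual waitforlisten helper:

    # Sketch only: start spdk_tgt and block until its RPC socket is live.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk.sock

    sudo "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &   # -e bdev: bdev tracepoints on
    tgt_pid=$!

    # rpc_get_methods is a built-in RPC; once it answers, the target is up.
    until sudo "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt (pid $tgt_pid) is listening on $SOCK"
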
00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.989 02:52:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.989 02:52:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:44.989 [2024-11-29 02:52:00.965856] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:04:44.989 [2024-11-29 02:52:00.965968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:04:45.251 [2024-11-29 02:52:01.112178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.251 [2024-11-29 02:52:01.130918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:45.251 [2024-11-29 02:52:01.130957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69031' to capture a snapshot of events at runtime. 00:04:45.251 [2024-11-29 02:52:01.130969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.251 [2024-11-29 02:52:01.130977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.251 [2024-11-29 02:52:01.130986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69031 for offline analysis/debug. 00:04:45.251 [2024-11-29 02:52:01.131280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.822 02:52:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.822 02:52:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.822 02:52:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.822 02:52:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.822 02:52:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.822 02:52:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.823 02:52:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.823 02:52:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.823 02:52:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.823 ************************************ 00:04:45.823 START TEST rpc_integrity 00:04:45.823 ************************************ 00:04:45.823 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:45.823 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.823 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.823 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.823 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.083 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.083 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.083 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
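
For reference, the rpc_integrity test starting here exercises the bdev RPC surface end to end: create a malloc bdev (8 MB, 512-byte blocks), layer a passthru bdev on it, confirm both appear in bdev_get_bdevs, then tear them down in reverse order. The same flow can be driven by hand with scripts/rpc.py, using the RPC names exactly as they appear in this log (a sketch; the harness's rpc_cmd is a thin wrapper around the same calls):

    # Sketch only: the rpc_integrity create/verify/delete flow via rpc.py.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc() { sudo "$SPDK_DIR/scripts/rpc.py" "$@"; }

    malloc=$(rpc bdev_malloc_create 8 512)   # 8 MB bdev, 512-byte blocks; prints its name
    rpc bdev_get_bdevs | jq length           # expect 1
    rpc bdev_passthru_create -b "$malloc" -p Passthru0
    rpc bdev_get_bdevs | jq length           # expect 2 (malloc + passthru)
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete "$malloc"
    rpc bdev_get_bdevs | jq length           # expect 0
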
00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.084 { 00:04:46.084 "name": "Malloc0", 00:04:46.084 "aliases": [ 00:04:46.084 "ef0dd930-2ea0-49f9-991a-58442c0dcc35" 00:04:46.084 ], 00:04:46.084 "product_name": "Malloc disk", 00:04:46.084 "block_size": 512, 00:04:46.084 "num_blocks": 16384, 00:04:46.084 "uuid": "ef0dd930-2ea0-49f9-991a-58442c0dcc35", 00:04:46.084 "assigned_rate_limits": { 00:04:46.084 "rw_ios_per_sec": 0, 00:04:46.084 "rw_mbytes_per_sec": 0, 00:04:46.084 "r_mbytes_per_sec": 0, 00:04:46.084 "w_mbytes_per_sec": 0 00:04:46.084 }, 00:04:46.084 "claimed": false, 00:04:46.084 "zoned": false, 00:04:46.084 "supported_io_types": { 00:04:46.084 "read": true, 00:04:46.084 "write": true, 00:04:46.084 "unmap": true, 00:04:46.084 "flush": true, 00:04:46.084 "reset": true, 00:04:46.084 "nvme_admin": false, 00:04:46.084 "nvme_io": false, 00:04:46.084 "nvme_io_md": false, 00:04:46.084 "write_zeroes": true, 00:04:46.084 "zcopy": true, 00:04:46.084 "get_zone_info": false, 00:04:46.084 "zone_management": false, 00:04:46.084 "zone_append": false, 00:04:46.084 "compare": false, 00:04:46.084 "compare_and_write": false, 00:04:46.084 "abort": true, 00:04:46.084 "seek_hole": false, 00:04:46.084 "seek_data": false, 00:04:46.084 "copy": true, 00:04:46.084 "nvme_iov_md": false 00:04:46.084 }, 00:04:46.084 "memory_domains": [ 00:04:46.084 { 00:04:46.084 "dma_device_id": "system", 00:04:46.084 "dma_device_type": 1 00:04:46.084 }, 00:04:46.084 { 00:04:46.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.084 "dma_device_type": 2 00:04:46.084 } 00:04:46.084 ], 00:04:46.084 "driver_specific": {} 00:04:46.084 } 00:04:46.084 ]' 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.084 [2024-11-29 02:52:01.915983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.084 [2024-11-29 02:52:01.916037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.084 [2024-11-29 02:52:01.916065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:46.084 [2024-11-29 02:52:01.916074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.084 [2024-11-29 02:52:01.918322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.084 [2024-11-29 02:52:01.918355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.084 
Passthru0 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.084 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.084 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.084 { 00:04:46.084 "name": "Malloc0", 00:04:46.084 "aliases": [ 00:04:46.084 "ef0dd930-2ea0-49f9-991a-58442c0dcc35" 00:04:46.084 ], 00:04:46.084 "product_name": "Malloc disk", 00:04:46.084 "block_size": 512, 00:04:46.084 "num_blocks": 16384, 00:04:46.084 "uuid": "ef0dd930-2ea0-49f9-991a-58442c0dcc35", 00:04:46.084 "assigned_rate_limits": { 00:04:46.084 "rw_ios_per_sec": 0, 00:04:46.084 "rw_mbytes_per_sec": 0, 00:04:46.084 "r_mbytes_per_sec": 0, 00:04:46.084 "w_mbytes_per_sec": 0 00:04:46.084 }, 00:04:46.084 "claimed": true, 00:04:46.084 "claim_type": "exclusive_write", 00:04:46.084 "zoned": false, 00:04:46.084 "supported_io_types": { 00:04:46.084 "read": true, 00:04:46.084 "write": true, 00:04:46.084 "unmap": true, 00:04:46.084 "flush": true, 00:04:46.084 "reset": true, 00:04:46.084 "nvme_admin": false, 00:04:46.084 "nvme_io": false, 00:04:46.084 "nvme_io_md": false, 00:04:46.084 "write_zeroes": true, 00:04:46.084 "zcopy": true, 00:04:46.084 "get_zone_info": false, 00:04:46.084 "zone_management": false, 00:04:46.084 "zone_append": false, 00:04:46.084 "compare": false, 00:04:46.084 "compare_and_write": false, 00:04:46.084 "abort": true, 00:04:46.084 "seek_hole": false, 00:04:46.084 "seek_data": false, 00:04:46.084 "copy": true, 00:04:46.084 "nvme_iov_md": false 00:04:46.084 }, 00:04:46.084 "memory_domains": [ 00:04:46.084 { 00:04:46.084 "dma_device_id": "system", 00:04:46.084 "dma_device_type": 1 00:04:46.084 }, 00:04:46.084 { 00:04:46.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.084 "dma_device_type": 2 00:04:46.084 } 00:04:46.084 ], 00:04:46.084 "driver_specific": {} 00:04:46.084 }, 00:04:46.084 { 00:04:46.084 "name": "Passthru0", 00:04:46.084 "aliases": [ 00:04:46.084 "07e5b29c-8cb7-5322-9f31-f441e9855845" 00:04:46.084 ], 00:04:46.084 "product_name": "passthru", 00:04:46.084 "block_size": 512, 00:04:46.084 "num_blocks": 16384, 00:04:46.084 "uuid": "07e5b29c-8cb7-5322-9f31-f441e9855845", 00:04:46.084 "assigned_rate_limits": { 00:04:46.084 "rw_ios_per_sec": 0, 00:04:46.084 "rw_mbytes_per_sec": 0, 00:04:46.084 "r_mbytes_per_sec": 0, 00:04:46.084 "w_mbytes_per_sec": 0 00:04:46.084 }, 00:04:46.084 "claimed": false, 00:04:46.084 "zoned": false, 00:04:46.084 "supported_io_types": { 00:04:46.084 "read": true, 00:04:46.084 "write": true, 00:04:46.084 "unmap": true, 00:04:46.084 "flush": true, 00:04:46.084 "reset": true, 00:04:46.085 "nvme_admin": false, 00:04:46.085 "nvme_io": false, 00:04:46.085 "nvme_io_md": false, 00:04:46.085 "write_zeroes": true, 00:04:46.085 "zcopy": true, 00:04:46.085 "get_zone_info": false, 00:04:46.085 "zone_management": false, 00:04:46.085 "zone_append": false, 00:04:46.085 "compare": false, 00:04:46.085 "compare_and_write": false, 00:04:46.085 "abort": true, 00:04:46.085 "seek_hole": false, 00:04:46.085 "seek_data": false, 00:04:46.085 "copy": true, 00:04:46.085 "nvme_iov_md": false 00:04:46.085 }, 00:04:46.085 "memory_domains": [ 00:04:46.085 { 00:04:46.085 "dma_device_id": "system", 00:04:46.085 "dma_device_type": 1 00:04:46.085 }, 
00:04:46.085 { 00:04:46.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.085 "dma_device_type": 2 00:04:46.085 } 00:04:46.085 ], 00:04:46.085 "driver_specific": { 00:04:46.085 "passthru": { 00:04:46.085 "name": "Passthru0", 00:04:46.085 "base_bdev_name": "Malloc0" 00:04:46.085 } 00:04:46.085 } 00:04:46.085 } 00:04:46.085 ]' 00:04:46.085 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.085 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.085 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.085 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.085 02:52:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.085 02:52:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.085 02:52:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.085 02:52:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.085 02:52:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.085 00:04:46.085 real 0m0.234s 00:04:46.085 user 0m0.134s 00:04:46.085 sys 0m0.029s 00:04:46.085 02:52:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.085 ************************************ 00:04:46.085 END TEST rpc_integrity 00:04:46.085 ************************************ 00:04:46.085 02:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 02:52:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.347 02:52:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.347 02:52:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.347 02:52:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 ************************************ 00:04:46.347 START TEST rpc_plugins 00:04:46.347 ************************************ 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.347 02:52:02 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.347 { 00:04:46.347 "name": "Malloc1", 00:04:46.347 "aliases": [ 00:04:46.347 "c0114ebc-a49e-42d0-812e-8c1c21daa9b4" 00:04:46.347 ], 00:04:46.347 "product_name": "Malloc disk", 00:04:46.347 "block_size": 4096, 00:04:46.347 "num_blocks": 256, 00:04:46.347 "uuid": "c0114ebc-a49e-42d0-812e-8c1c21daa9b4", 00:04:46.347 "assigned_rate_limits": { 00:04:46.347 "rw_ios_per_sec": 0, 00:04:46.347 "rw_mbytes_per_sec": 0, 00:04:46.347 "r_mbytes_per_sec": 0, 00:04:46.347 "w_mbytes_per_sec": 0 00:04:46.347 }, 00:04:46.347 "claimed": false, 00:04:46.347 "zoned": false, 00:04:46.347 "supported_io_types": { 00:04:46.347 "read": true, 00:04:46.347 "write": true, 00:04:46.347 "unmap": true, 00:04:46.347 "flush": true, 00:04:46.347 "reset": true, 00:04:46.347 "nvme_admin": false, 00:04:46.347 "nvme_io": false, 00:04:46.347 "nvme_io_md": false, 00:04:46.347 "write_zeroes": true, 00:04:46.347 "zcopy": true, 00:04:46.347 "get_zone_info": false, 00:04:46.347 "zone_management": false, 00:04:46.347 "zone_append": false, 00:04:46.347 "compare": false, 00:04:46.347 "compare_and_write": false, 00:04:46.347 "abort": true, 00:04:46.347 "seek_hole": false, 00:04:46.347 "seek_data": false, 00:04:46.347 "copy": true, 00:04:46.347 "nvme_iov_md": false 00:04:46.347 }, 00:04:46.347 "memory_domains": [ 00:04:46.347 { 00:04:46.347 "dma_device_id": "system", 00:04:46.347 "dma_device_type": 1 00:04:46.347 }, 00:04:46.347 { 00:04:46.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.347 "dma_device_type": 2 00:04:46.347 } 00:04:46.347 ], 00:04:46.347 "driver_specific": {} 00:04:46.347 } 00:04:46.347 ]' 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.347 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.347 02:52:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.347 00:04:46.347 real 0m0.113s 00:04:46.347 user 0m0.064s 00:04:46.347 sys 0m0.012s 00:04:46.348 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.348 02:52:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.348 ************************************ 00:04:46.348 END TEST rpc_plugins 00:04:46.348 ************************************ 00:04:46.348 02:52:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.348 02:52:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.348 02:52:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.348 02:52:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.348 ************************************ 00:04:46.348 START TEST rpc_trace_cmd_test 
00:04:46.348 ************************************ 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:46.348 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69031", 00:04:46.348 "tpoint_group_mask": "0x8", 00:04:46.348 "iscsi_conn": { 00:04:46.348 "mask": "0x2", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "scsi": { 00:04:46.348 "mask": "0x4", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "bdev": { 00:04:46.348 "mask": "0x8", 00:04:46.348 "tpoint_mask": "0xffffffffffffffff" 00:04:46.348 }, 00:04:46.348 "nvmf_rdma": { 00:04:46.348 "mask": "0x10", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "nvmf_tcp": { 00:04:46.348 "mask": "0x20", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "ftl": { 00:04:46.348 "mask": "0x40", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "blobfs": { 00:04:46.348 "mask": "0x80", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "dsa": { 00:04:46.348 "mask": "0x200", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "thread": { 00:04:46.348 "mask": "0x400", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "nvme_pcie": { 00:04:46.348 "mask": "0x800", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "iaa": { 00:04:46.348 "mask": "0x1000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "nvme_tcp": { 00:04:46.348 "mask": "0x2000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "bdev_nvme": { 00:04:46.348 "mask": "0x4000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "sock": { 00:04:46.348 "mask": "0x8000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "blob": { 00:04:46.348 "mask": "0x10000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "bdev_raid": { 00:04:46.348 "mask": "0x20000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 }, 00:04:46.348 "scheduler": { 00:04:46.348 "mask": "0x40000", 00:04:46.348 "tpoint_mask": "0x0" 00:04:46.348 } 00:04:46.348 }' 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:46.348 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:46.608 00:04:46.608 real 0m0.184s 00:04:46.608 
user 0m0.156s 00:04:46.608 sys 0m0.016s 00:04:46.608 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.609 02:52:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 ************************************ 00:04:46.609 END TEST rpc_trace_cmd_test 00:04:46.609 ************************************ 00:04:46.609 02:52:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:46.609 02:52:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.609 02:52:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.609 02:52:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.609 02:52:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.609 02:52:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 ************************************ 00:04:46.609 START TEST rpc_daemon_integrity 00:04:46.609 ************************************ 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.609 { 00:04:46.609 "name": "Malloc2", 00:04:46.609 "aliases": [ 00:04:46.609 "204c202d-3050-4949-b8bb-0a256833a88a" 00:04:46.609 ], 00:04:46.609 "product_name": "Malloc disk", 00:04:46.609 "block_size": 512, 00:04:46.609 "num_blocks": 16384, 00:04:46.609 "uuid": "204c202d-3050-4949-b8bb-0a256833a88a", 00:04:46.609 "assigned_rate_limits": { 00:04:46.609 "rw_ios_per_sec": 0, 00:04:46.609 "rw_mbytes_per_sec": 0, 00:04:46.609 "r_mbytes_per_sec": 0, 00:04:46.609 "w_mbytes_per_sec": 0 00:04:46.609 }, 00:04:46.609 "claimed": false, 00:04:46.609 "zoned": false, 00:04:46.609 "supported_io_types": { 00:04:46.609 "read": true, 00:04:46.609 "write": true, 00:04:46.609 "unmap": true, 00:04:46.609 "flush": true, 00:04:46.609 "reset": true, 00:04:46.609 "nvme_admin": false, 00:04:46.609 "nvme_io": false, 00:04:46.609 "nvme_io_md": false, 00:04:46.609 "write_zeroes": true, 00:04:46.609 "zcopy": true, 00:04:46.609 "get_zone_info": 
false, 00:04:46.609 "zone_management": false, 00:04:46.609 "zone_append": false, 00:04:46.609 "compare": false, 00:04:46.609 "compare_and_write": false, 00:04:46.609 "abort": true, 00:04:46.609 "seek_hole": false, 00:04:46.609 "seek_data": false, 00:04:46.609 "copy": true, 00:04:46.609 "nvme_iov_md": false 00:04:46.609 }, 00:04:46.609 "memory_domains": [ 00:04:46.609 { 00:04:46.609 "dma_device_id": "system", 00:04:46.609 "dma_device_type": 1 00:04:46.609 }, 00:04:46.609 { 00:04:46.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.609 "dma_device_type": 2 00:04:46.609 } 00:04:46.609 ], 00:04:46.609 "driver_specific": {} 00:04:46.609 } 00:04:46.609 ]' 00:04:46.609 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.871 [2024-11-29 02:52:02.604378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:46.871 [2024-11-29 02:52:02.604426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.871 [2024-11-29 02:52:02.604446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:46.871 [2024-11-29 02:52:02.604455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.871 [2024-11-29 02:52:02.606655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.871 [2024-11-29 02:52:02.606687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.871 Passthru0 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.871 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.871 { 00:04:46.871 "name": "Malloc2", 00:04:46.871 "aliases": [ 00:04:46.871 "204c202d-3050-4949-b8bb-0a256833a88a" 00:04:46.871 ], 00:04:46.871 "product_name": "Malloc disk", 00:04:46.871 "block_size": 512, 00:04:46.871 "num_blocks": 16384, 00:04:46.871 "uuid": "204c202d-3050-4949-b8bb-0a256833a88a", 00:04:46.871 "assigned_rate_limits": { 00:04:46.871 "rw_ios_per_sec": 0, 00:04:46.871 "rw_mbytes_per_sec": 0, 00:04:46.871 "r_mbytes_per_sec": 0, 00:04:46.871 "w_mbytes_per_sec": 0 00:04:46.871 }, 00:04:46.871 "claimed": true, 00:04:46.871 "claim_type": "exclusive_write", 00:04:46.871 "zoned": false, 00:04:46.871 "supported_io_types": { 00:04:46.871 "read": true, 00:04:46.871 "write": true, 00:04:46.871 "unmap": true, 00:04:46.871 "flush": true, 00:04:46.871 "reset": true, 00:04:46.871 "nvme_admin": false, 00:04:46.871 "nvme_io": false, 00:04:46.871 "nvme_io_md": false, 00:04:46.871 "write_zeroes": true, 00:04:46.871 "zcopy": true, 00:04:46.871 "get_zone_info": false, 00:04:46.871 "zone_management": false, 00:04:46.871 "zone_append": false, 00:04:46.871 "compare": false, 
00:04:46.871 "compare_and_write": false, 00:04:46.871 "abort": true, 00:04:46.871 "seek_hole": false, 00:04:46.871 "seek_data": false, 00:04:46.871 "copy": true, 00:04:46.871 "nvme_iov_md": false 00:04:46.871 }, 00:04:46.871 "memory_domains": [ 00:04:46.871 { 00:04:46.871 "dma_device_id": "system", 00:04:46.871 "dma_device_type": 1 00:04:46.871 }, 00:04:46.871 { 00:04:46.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.871 "dma_device_type": 2 00:04:46.871 } 00:04:46.871 ], 00:04:46.871 "driver_specific": {} 00:04:46.871 }, 00:04:46.871 { 00:04:46.871 "name": "Passthru0", 00:04:46.871 "aliases": [ 00:04:46.871 "103ce2ff-d382-5d45-8ce1-f81bdd227b68" 00:04:46.871 ], 00:04:46.871 "product_name": "passthru", 00:04:46.871 "block_size": 512, 00:04:46.871 "num_blocks": 16384, 00:04:46.871 "uuid": "103ce2ff-d382-5d45-8ce1-f81bdd227b68", 00:04:46.871 "assigned_rate_limits": { 00:04:46.871 "rw_ios_per_sec": 0, 00:04:46.871 "rw_mbytes_per_sec": 0, 00:04:46.871 "r_mbytes_per_sec": 0, 00:04:46.871 "w_mbytes_per_sec": 0 00:04:46.871 }, 00:04:46.871 "claimed": false, 00:04:46.871 "zoned": false, 00:04:46.871 "supported_io_types": { 00:04:46.871 "read": true, 00:04:46.871 "write": true, 00:04:46.871 "unmap": true, 00:04:46.871 "flush": true, 00:04:46.871 "reset": true, 00:04:46.871 "nvme_admin": false, 00:04:46.871 "nvme_io": false, 00:04:46.871 "nvme_io_md": false, 00:04:46.872 "write_zeroes": true, 00:04:46.872 "zcopy": true, 00:04:46.872 "get_zone_info": false, 00:04:46.872 "zone_management": false, 00:04:46.872 "zone_append": false, 00:04:46.872 "compare": false, 00:04:46.872 "compare_and_write": false, 00:04:46.872 "abort": true, 00:04:46.872 "seek_hole": false, 00:04:46.872 "seek_data": false, 00:04:46.872 "copy": true, 00:04:46.872 "nvme_iov_md": false 00:04:46.872 }, 00:04:46.872 "memory_domains": [ 00:04:46.872 { 00:04:46.872 "dma_device_id": "system", 00:04:46.872 "dma_device_type": 1 00:04:46.872 }, 00:04:46.872 { 00:04:46.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.872 "dma_device_type": 2 00:04:46.872 } 00:04:46.872 ], 00:04:46.872 "driver_specific": { 00:04:46.872 "passthru": { 00:04:46.872 "name": "Passthru0", 00:04:46.872 "base_bdev_name": "Malloc2" 00:04:46.872 } 00:04:46.872 } 00:04:46.872 } 00:04:46.872 ]' 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.872 00:04:46.872 real 0m0.225s 00:04:46.872 user 0m0.142s 00:04:46.872 sys 0m0.025s 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.872 ************************************ 00:04:46.872 END TEST rpc_daemon_integrity 00:04:46.872 ************************************ 00:04:46.872 02:52:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.872 02:52:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.872 02:52:02 rpc -- rpc/rpc.sh@84 -- # killprocess 69031 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 69031 ']' 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@958 -- # kill -0 69031 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69031 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.872 killing process with pid 69031 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69031' 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@973 -- # kill 69031 00:04:46.872 02:52:02 rpc -- common/autotest_common.sh@978 -- # wait 69031 00:04:47.133 00:04:47.133 real 0m2.294s 00:04:47.133 user 0m2.796s 00:04:47.133 sys 0m0.517s 00:04:47.133 02:52:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.134 ************************************ 00:04:47.134 END TEST rpc 00:04:47.134 ************************************ 00:04:47.134 02:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.134 02:52:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.134 02:52:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.134 02:52:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.134 02:52:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.134 ************************************ 00:04:47.134 START TEST skip_rpc 00:04:47.134 ************************************ 00:04:47.134 02:52:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.394 * Looking for test storage... 
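[annotation] The rpc_daemon_integrity run above repeats the rpc_integrity flow through the rpc_cmd daemon wrapper: create a malloc bdev, stack a passthru on it, count bdevs, then tear both down. A minimal sketch of the equivalent manual flow against a running spdk_tgt (assuming a default SPDK checkout with scripts/rpc.py talking to /var/tmp/spdk.sock):

  scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB / 512 B blocks -> 16384 blocks, prints e.g. Malloc2
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length          # expect 2: Malloc2 + Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length          # expect 0 again

The jq length probes mirror the '[' 0 == 0 ']' and '[' 2 == 2 ']' assertions in rpc.sh above.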
00:04:47.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.394 02:52:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.394 --rc genhtml_branch_coverage=1 00:04:47.394 --rc genhtml_function_coverage=1 00:04:47.394 --rc genhtml_legend=1 00:04:47.394 --rc geninfo_all_blocks=1 00:04:47.394 --rc geninfo_unexecuted_blocks=1 00:04:47.394 00:04:47.394 ' 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.394 --rc genhtml_branch_coverage=1 00:04:47.394 --rc genhtml_function_coverage=1 00:04:47.394 --rc genhtml_legend=1 00:04:47.394 --rc geninfo_all_blocks=1 00:04:47.394 --rc geninfo_unexecuted_blocks=1 00:04:47.394 00:04:47.394 ' 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.394 --rc genhtml_branch_coverage=1 00:04:47.394 --rc genhtml_function_coverage=1 00:04:47.394 --rc genhtml_legend=1 00:04:47.394 --rc geninfo_all_blocks=1 00:04:47.394 --rc geninfo_unexecuted_blocks=1 00:04:47.394 00:04:47.394 ' 00:04:47.394 02:52:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.395 --rc genhtml_branch_coverage=1 00:04:47.395 --rc genhtml_function_coverage=1 00:04:47.395 --rc genhtml_legend=1 00:04:47.395 --rc geninfo_all_blocks=1 00:04:47.395 --rc geninfo_unexecuted_blocks=1 00:04:47.395 00:04:47.395 ' 00:04:47.395 02:52:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.395 02:52:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.395 02:52:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:47.395 02:52:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.395 02:52:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.395 02:52:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.395 ************************************ 00:04:47.395 START TEST skip_rpc 00:04:47.395 ************************************ 00:04:47.395 02:52:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:47.395 02:52:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69227 00:04:47.395 02:52:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.395 02:52:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:47.395 02:52:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:47.395 [2024-11-29 02:52:03.315264] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
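[annotation] Here spdk_tgt is launched with --no-rpc-server (skip_rpc.sh@15), so the EAL bring-up that follows happens with no RPC listener at all; after the fixed sleep 5 the test asserts that an RPC call fails. A rough standalone reproduction, with paths as in the repo layout above:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                            # same fixed delay the test uses
  scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' >&2
  kill $tgt

spdk_get_version is just the cheapest method to probe with; any RPC would do, since nothing is listening on /var/tmp/spdk.sock.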
00:04:47.395 [2024-11-29 02:52:03.315392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69227 ] 00:04:47.656 [2024-11-29 02:52:03.461536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.656 [2024-11-29 02:52:03.480367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69227 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69227 ']' 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69227 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69227 00:04:52.966 killing process with pid 69227 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69227' 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69227 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69227 00:04:52.966 00:04:52.966 real 0m5.261s 00:04:52.966 user 0m4.933s 00:04:52.966 sys 0m0.229s 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.966 02:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 ************************************ 00:04:52.966 END TEST skip_rpc 00:04:52.966 
************************************ 00:04:52.966 02:52:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.966 02:52:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.966 02:52:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.966 02:52:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 ************************************ 00:04:52.966 START TEST skip_rpc_with_json 00:04:52.966 ************************************ 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69309 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69309 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69309 ']' 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 02:52:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.966 [2024-11-29 02:52:08.639777] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
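[annotation] This second target (pid 69309) runs with the RPC server enabled, and the with_json test drives it through a create/save/reload cycle. In outline, a sketch of what rpc/skip_rpc.sh does in the trace below:

  scripts/rpc.py nvmf_get_transports --trtype tcp    # expected to fail: no transport exists yet
  scripts/rpc.py nvmf_create_transport -t tcp        # logs '*** TCP Transport Init ***'
  scripts/rpc.py save_config > test/rpc/config.json
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
  grep -q 'TCP Transport Init' test/rpc/log.txt      # the reload must re-create the transport

The grep at the end is the real assertion: the restarted target has no RPC server, so the TCP transport can only come back if the saved config.json was replayed faithfully.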
00:04:52.966 [2024-11-29 02:52:08.639909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69309 ] 00:04:52.966 [2024-11-29 02:52:08.780958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.966 [2024-11-29 02:52:08.800004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.532 [2024-11-29 02:52:09.486856] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:53.532 request: 00:04:53.532 { 00:04:53.532 "trtype": "tcp", 00:04:53.532 "method": "nvmf_get_transports", 00:04:53.532 "req_id": 1 00:04:53.532 } 00:04:53.532 Got JSON-RPC error response 00:04:53.532 response: 00:04:53.532 { 00:04:53.532 "code": -19, 00:04:53.532 "message": "No such device" 00:04:53.532 } 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:53.532 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.533 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.533 [2024-11-29 02:52:09.494944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.533 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.533 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:53.533 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.533 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.791 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.791 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.791 { 00:04:53.791 "subsystems": [ 00:04:53.791 { 00:04:53.791 "subsystem": "fsdev", 00:04:53.791 "config": [ 00:04:53.791 { 00:04:53.791 "method": "fsdev_set_opts", 00:04:53.791 "params": { 00:04:53.791 "fsdev_io_pool_size": 65535, 00:04:53.791 "fsdev_io_cache_size": 256 00:04:53.791 } 00:04:53.791 } 00:04:53.791 ] 00:04:53.791 }, 00:04:53.791 { 00:04:53.791 "subsystem": "keyring", 00:04:53.791 "config": [] 00:04:53.791 }, 00:04:53.791 { 00:04:53.791 "subsystem": "iobuf", 00:04:53.791 "config": [ 00:04:53.791 { 00:04:53.791 "method": "iobuf_set_options", 00:04:53.791 "params": { 00:04:53.791 "small_pool_count": 8192, 00:04:53.791 "large_pool_count": 1024, 00:04:53.791 "small_bufsize": 8192, 00:04:53.791 "large_bufsize": 135168, 00:04:53.791 "enable_numa": false 00:04:53.791 } 00:04:53.791 } 00:04:53.791 ] 00:04:53.791 }, 00:04:53.791 { 00:04:53.791 "subsystem": "sock", 00:04:53.791 "config": [ 00:04:53.791 { 
00:04:53.791 "method": "sock_set_default_impl", 00:04:53.791 "params": { 00:04:53.791 "impl_name": "posix" 00:04:53.791 } 00:04:53.791 }, 00:04:53.791 { 00:04:53.791 "method": "sock_impl_set_options", 00:04:53.792 "params": { 00:04:53.792 "impl_name": "ssl", 00:04:53.792 "recv_buf_size": 4096, 00:04:53.792 "send_buf_size": 4096, 00:04:53.792 "enable_recv_pipe": true, 00:04:53.792 "enable_quickack": false, 00:04:53.792 "enable_placement_id": 0, 00:04:53.792 "enable_zerocopy_send_server": true, 00:04:53.792 "enable_zerocopy_send_client": false, 00:04:53.792 "zerocopy_threshold": 0, 00:04:53.792 "tls_version": 0, 00:04:53.792 "enable_ktls": false 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "sock_impl_set_options", 00:04:53.792 "params": { 00:04:53.792 "impl_name": "posix", 00:04:53.792 "recv_buf_size": 2097152, 00:04:53.792 "send_buf_size": 2097152, 00:04:53.792 "enable_recv_pipe": true, 00:04:53.792 "enable_quickack": false, 00:04:53.792 "enable_placement_id": 0, 00:04:53.792 "enable_zerocopy_send_server": true, 00:04:53.792 "enable_zerocopy_send_client": false, 00:04:53.792 "zerocopy_threshold": 0, 00:04:53.792 "tls_version": 0, 00:04:53.792 "enable_ktls": false 00:04:53.792 } 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "vmd", 00:04:53.792 "config": [] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "accel", 00:04:53.792 "config": [ 00:04:53.792 { 00:04:53.792 "method": "accel_set_options", 00:04:53.792 "params": { 00:04:53.792 "small_cache_size": 128, 00:04:53.792 "large_cache_size": 16, 00:04:53.792 "task_count": 2048, 00:04:53.792 "sequence_count": 2048, 00:04:53.792 "buf_count": 2048 00:04:53.792 } 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "bdev", 00:04:53.792 "config": [ 00:04:53.792 { 00:04:53.792 "method": "bdev_set_options", 00:04:53.792 "params": { 00:04:53.792 "bdev_io_pool_size": 65535, 00:04:53.792 "bdev_io_cache_size": 256, 00:04:53.792 "bdev_auto_examine": true, 00:04:53.792 "iobuf_small_cache_size": 128, 00:04:53.792 "iobuf_large_cache_size": 16 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "bdev_raid_set_options", 00:04:53.792 "params": { 00:04:53.792 "process_window_size_kb": 1024, 00:04:53.792 "process_max_bandwidth_mb_sec": 0 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "bdev_iscsi_set_options", 00:04:53.792 "params": { 00:04:53.792 "timeout_sec": 30 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "bdev_nvme_set_options", 00:04:53.792 "params": { 00:04:53.792 "action_on_timeout": "none", 00:04:53.792 "timeout_us": 0, 00:04:53.792 "timeout_admin_us": 0, 00:04:53.792 "keep_alive_timeout_ms": 10000, 00:04:53.792 "arbitration_burst": 0, 00:04:53.792 "low_priority_weight": 0, 00:04:53.792 "medium_priority_weight": 0, 00:04:53.792 "high_priority_weight": 0, 00:04:53.792 "nvme_adminq_poll_period_us": 10000, 00:04:53.792 "nvme_ioq_poll_period_us": 0, 00:04:53.792 "io_queue_requests": 0, 00:04:53.792 "delay_cmd_submit": true, 00:04:53.792 "transport_retry_count": 4, 00:04:53.792 "bdev_retry_count": 3, 00:04:53.792 "transport_ack_timeout": 0, 00:04:53.792 "ctrlr_loss_timeout_sec": 0, 00:04:53.792 "reconnect_delay_sec": 0, 00:04:53.792 "fast_io_fail_timeout_sec": 0, 00:04:53.792 "disable_auto_failback": false, 00:04:53.792 "generate_uuids": false, 00:04:53.792 "transport_tos": 0, 00:04:53.792 "nvme_error_stat": false, 00:04:53.792 "rdma_srq_size": 0, 00:04:53.792 "io_path_stat": false, 
00:04:53.792 "allow_accel_sequence": false, 00:04:53.792 "rdma_max_cq_size": 0, 00:04:53.792 "rdma_cm_event_timeout_ms": 0, 00:04:53.792 "dhchap_digests": [ 00:04:53.792 "sha256", 00:04:53.792 "sha384", 00:04:53.792 "sha512" 00:04:53.792 ], 00:04:53.792 "dhchap_dhgroups": [ 00:04:53.792 "null", 00:04:53.792 "ffdhe2048", 00:04:53.792 "ffdhe3072", 00:04:53.792 "ffdhe4096", 00:04:53.792 "ffdhe6144", 00:04:53.792 "ffdhe8192" 00:04:53.792 ] 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "bdev_nvme_set_hotplug", 00:04:53.792 "params": { 00:04:53.792 "period_us": 100000, 00:04:53.792 "enable": false 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "bdev_wait_for_examine" 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "scsi", 00:04:53.792 "config": null 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "scheduler", 00:04:53.792 "config": [ 00:04:53.792 { 00:04:53.792 "method": "framework_set_scheduler", 00:04:53.792 "params": { 00:04:53.792 "name": "static" 00:04:53.792 } 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "vhost_scsi", 00:04:53.792 "config": [] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "vhost_blk", 00:04:53.792 "config": [] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "ublk", 00:04:53.792 "config": [] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "nbd", 00:04:53.792 "config": [] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "nvmf", 00:04:53.792 "config": [ 00:04:53.792 { 00:04:53.792 "method": "nvmf_set_config", 00:04:53.792 "params": { 00:04:53.792 "discovery_filter": "match_any", 00:04:53.792 "admin_cmd_passthru": { 00:04:53.792 "identify_ctrlr": false 00:04:53.792 }, 00:04:53.792 "dhchap_digests": [ 00:04:53.792 "sha256", 00:04:53.792 "sha384", 00:04:53.792 "sha512" 00:04:53.792 ], 00:04:53.792 "dhchap_dhgroups": [ 00:04:53.792 "null", 00:04:53.792 "ffdhe2048", 00:04:53.792 "ffdhe3072", 00:04:53.792 "ffdhe4096", 00:04:53.792 "ffdhe6144", 00:04:53.792 "ffdhe8192" 00:04:53.792 ] 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "nvmf_set_max_subsystems", 00:04:53.792 "params": { 00:04:53.792 "max_subsystems": 1024 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "nvmf_set_crdt", 00:04:53.792 "params": { 00:04:53.792 "crdt1": 0, 00:04:53.792 "crdt2": 0, 00:04:53.792 "crdt3": 0 00:04:53.792 } 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "method": "nvmf_create_transport", 00:04:53.792 "params": { 00:04:53.792 "trtype": "TCP", 00:04:53.792 "max_queue_depth": 128, 00:04:53.792 "max_io_qpairs_per_ctrlr": 127, 00:04:53.792 "in_capsule_data_size": 4096, 00:04:53.792 "max_io_size": 131072, 00:04:53.792 "io_unit_size": 131072, 00:04:53.792 "max_aq_depth": 128, 00:04:53.792 "num_shared_buffers": 511, 00:04:53.792 "buf_cache_size": 4294967295, 00:04:53.792 "dif_insert_or_strip": false, 00:04:53.792 "zcopy": false, 00:04:53.792 "c2h_success": true, 00:04:53.792 "sock_priority": 0, 00:04:53.792 "abort_timeout_sec": 1, 00:04:53.792 "ack_timeout": 0, 00:04:53.792 "data_wr_pool_size": 0 00:04:53.792 } 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 }, 00:04:53.792 { 00:04:53.792 "subsystem": "iscsi", 00:04:53.792 "config": [ 00:04:53.792 { 00:04:53.792 "method": "iscsi_set_options", 00:04:53.792 "params": { 00:04:53.792 "node_base": "iqn.2016-06.io.spdk", 00:04:53.792 "max_sessions": 128, 00:04:53.792 "max_connections_per_session": 2, 00:04:53.792 "max_queue_depth": 64, 00:04:53.792 
"default_time2wait": 2, 00:04:53.792 "default_time2retain": 20, 00:04:53.792 "first_burst_length": 8192, 00:04:53.792 "immediate_data": true, 00:04:53.792 "allow_duplicated_isid": false, 00:04:53.792 "error_recovery_level": 0, 00:04:53.792 "nop_timeout": 60, 00:04:53.792 "nop_in_interval": 30, 00:04:53.792 "disable_chap": false, 00:04:53.792 "require_chap": false, 00:04:53.792 "mutual_chap": false, 00:04:53.792 "chap_group": 0, 00:04:53.792 "max_large_datain_per_connection": 64, 00:04:53.792 "max_r2t_per_connection": 4, 00:04:53.792 "pdu_pool_size": 36864, 00:04:53.792 "immediate_data_pool_size": 16384, 00:04:53.792 "data_out_pool_size": 2048 00:04:53.792 } 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 } 00:04:53.792 ] 00:04:53.792 } 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69309 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69309 ']' 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69309 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69309 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.792 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.792 killing process with pid 69309 00:04:53.793 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69309' 00:04:53.793 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69309 00:04:53.793 02:52:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69309 00:04:54.055 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69338 00:04:54.055 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.055 02:52:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69338 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69338 ']' 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69338 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69338 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.318 killing process with pid 69338 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69338' 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69338 00:04:59.318 02:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69338 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.318 00:04:59.318 real 0m6.586s 00:04:59.318 user 0m6.313s 00:04:59.318 sys 0m0.510s 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.318 ************************************ 00:04:59.318 END TEST skip_rpc_with_json 00:04:59.318 ************************************ 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.318 02:52:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:59.318 02:52:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.318 02:52:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.318 02:52:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.318 ************************************ 00:04:59.318 START TEST skip_rpc_with_delay 00:04:59.318 ************************************ 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.318 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.319 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.319 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.319 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.319 [2024-11-29 02:52:15.269124] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
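[annotation] The with_delay case above needs no RPC traffic at all: it only checks that spdk_tgt rejects the flag combination at startup, which is why the trace stops at the spdk_app_start error. Sketch:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  [ $? -ne 0 ] || echo 'unexpected: target started' >&2

--wait-for-rpc asks the app to pause framework init until a framework_start_init RPC arrives, which can never happen once --no-rpc-server has disabled the listener, hence the refusal and the non-zero exit the es= bookkeeping below verifies.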
00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.583 00:04:59.583 real 0m0.103s 00:04:59.583 user 0m0.046s 00:04:59.583 sys 0m0.056s 00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.583 02:52:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:59.583 ************************************ 00:04:59.583 END TEST skip_rpc_with_delay 00:04:59.583 ************************************ 00:04:59.583 02:52:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.583 02:52:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.583 02:52:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.583 02:52:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.583 02:52:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.583 02:52:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.583 ************************************ 00:04:59.583 START TEST exit_on_failed_rpc_init 00:04:59.583 ************************************ 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69449 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69449 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69449 ']' 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.583 02:52:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.583 [2024-11-29 02:52:15.433612] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:04:59.583 [2024-11-29 02:52:15.433725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69449 ] 00:04:59.840 [2024-11-29 02:52:15.573792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.840 [2024-11-29 02:52:15.590217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.404 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.404 [2024-11-29 02:52:16.295450] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:00.404 [2024-11-29 02:52:16.295558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69462 ] 00:05:00.662 [2024-11-29 02:52:16.439335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.662 [2024-11-29 02:52:16.457305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.662 [2024-11-29 02:52:16.457376] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
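[annotation] exit_on_failed_rpc_init starts this second spdk_tgt while pid 69449 still owns the default RPC socket, so rpc_listen fails as shown; the lines below finish the shutdown and map the non-zero exit (es=234) through the NOT helper. Equivalent by hand:

  build/bin/spdk_tgt -m 0x1 &                        # first target binds /var/tmp/spdk.sock
  sleep 1
  build/bin/spdk_tgt -m 0x2; echo "rc=$?"            # expect rc != 0: socket in use

Passing -r to give the second instance its own socket path would let it start cleanly, which is exactly the outcome this test must not see.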
00:05:00.662 [2024-11-29 02:52:16.457390] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.662 [2024-11-29 02:52:16.457399] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69449 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69449 ']' 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69449 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69449 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.662 killing process with pid 69449 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69449' 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69449 00:05:00.662 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69449 00:05:00.921 00:05:00.921 real 0m1.396s 00:05:00.921 user 0m1.525s 00:05:00.921 sys 0m0.322s 00:05:00.921 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.921 ************************************ 00:05:00.921 END TEST exit_on_failed_rpc_init 00:05:00.921 02:52:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 ************************************ 00:05:00.921 02:52:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.921 00:05:00.921 real 0m13.719s 00:05:00.921 user 0m12.948s 00:05:00.921 sys 0m1.299s 00:05:00.921 02:52:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.921 ************************************ 00:05:00.921 END TEST skip_rpc 00:05:00.921 ************************************ 00:05:00.921 02:52:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 02:52:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.921 02:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.921 02:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.921 02:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 
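[annotation] The rpc_client suite that follows differs from the shell-driven tests above: rpc_client_test is a compiled C binary that speaks JSON-RPC to the target over the Unix socket directly. The wire format it exercises can be poked at by hand; a minimal probe against a running target (a sketch, using OpenBSD netcat's -U option) might look like:

  printf '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' | nc -U /var/tmp/spdk.sock

Every SPDK RPC is a single JSON-RPC 2.0 object of this shape; the OK printed below is the client test's own pass marker.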
************************************ 00:05:00.921 START TEST rpc_client 00:05:00.921 ************************************ 00:05:00.921 02:52:16 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.183 * Looking for test storage... 00:05:01.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.183 02:52:16 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.183 02:52:16 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.183 02:52:16 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.183 02:52:16 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.183 02:52:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.183 02:52:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.183 02:52:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.183 02:52:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.183 --rc genhtml_branch_coverage=1 00:05:01.183 --rc genhtml_function_coverage=1 00:05:01.183 --rc genhtml_legend=1 00:05:01.183 --rc geninfo_all_blocks=1 00:05:01.183 --rc geninfo_unexecuted_blocks=1 00:05:01.183 00:05:01.183 ' 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.183 --rc genhtml_branch_coverage=1 00:05:01.183 --rc genhtml_function_coverage=1 00:05:01.183 --rc genhtml_legend=1 00:05:01.183 --rc geninfo_all_blocks=1 00:05:01.183 --rc geninfo_unexecuted_blocks=1 00:05:01.183 00:05:01.183 ' 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.183 --rc genhtml_branch_coverage=1 00:05:01.183 --rc genhtml_function_coverage=1 00:05:01.183 --rc genhtml_legend=1 00:05:01.183 --rc geninfo_all_blocks=1 00:05:01.183 --rc geninfo_unexecuted_blocks=1 00:05:01.183 00:05:01.183 ' 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.183 --rc genhtml_branch_coverage=1 00:05:01.183 --rc genhtml_function_coverage=1 00:05:01.183 --rc genhtml_legend=1 00:05:01.183 --rc geninfo_all_blocks=1 00:05:01.183 --rc geninfo_unexecuted_blocks=1 00:05:01.183 00:05:01.183 ' 00:05:01.183 02:52:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:01.183 OK 00:05:01.183 02:52:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.183 00:05:01.183 real 0m0.180s 00:05:01.183 user 0m0.104s 00:05:01.183 sys 0m0.081s 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.183 ************************************ 00:05:01.183 END TEST rpc_client 00:05:01.183 ************************************ 00:05:01.183 02:52:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.183 02:52:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.183 02:52:17 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.183 02:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.183 02:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:01.183 ************************************ 00:05:01.183 START TEST json_config 00:05:01.183 ************************************ 00:05:01.183 02:52:17 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.183 02:52:17 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.183 02:52:17 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.183 02:52:17 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.446 02:52:17 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.446 02:52:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.446 02:52:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.446 02:52:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.446 02:52:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.446 02:52:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.446 02:52:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:01.446 02:52:17 json_config -- scripts/common.sh@345 -- # : 1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.446 02:52:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.446 02:52:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@353 -- # local d=1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.446 02:52:17 json_config -- scripts/common.sh@355 -- # echo 1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.446 02:52:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@353 -- # local d=2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.446 02:52:17 json_config -- scripts/common.sh@355 -- # echo 2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.446 02:52:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.446 02:52:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.446 02:52:17 json_config -- scripts/common.sh@368 -- # return 0 00:05:01.446 02:52:17 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.446 02:52:17 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.446 --rc genhtml_branch_coverage=1 00:05:01.446 --rc genhtml_function_coverage=1 00:05:01.446 --rc genhtml_legend=1 00:05:01.446 --rc geninfo_all_blocks=1 00:05:01.446 --rc geninfo_unexecuted_blocks=1 00:05:01.446 00:05:01.446 ' 00:05:01.446 02:52:17 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.447 --rc geninfo_all_blocks=1 00:05:01.447 --rc geninfo_unexecuted_blocks=1 00:05:01.447 00:05:01.447 ' 00:05:01.447 02:52:17 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.447 --rc geninfo_all_blocks=1 00:05:01.447 --rc geninfo_unexecuted_blocks=1 00:05:01.447 00:05:01.447 ' 00:05:01.447 02:52:17 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.447 --rc geninfo_all_blocks=1 00:05:01.447 --rc geninfo_unexecuted_blocks=1 00:05:01.447 00:05:01.447 ' 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.447 02:52:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.447 02:52:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.447 02:52:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.447 02:52:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.447 02:52:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.447 02:52:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.447 02:52:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.447 02:52:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.447 02:52:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.447 02:52:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@51 -- # : 0 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.447 02:52:17 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.447 02:52:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.447 WARNING: No tests are enabled so not running JSON configuration tests 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:01.447 02:52:17 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:01.447 00:05:01.447 real 0m0.136s 00:05:01.447 user 0m0.091s 00:05:01.447 sys 0m0.047s 00:05:01.447 02:52:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.447 ************************************ 00:05:01.447 END TEST json_config 00:05:01.447 ************************************ 00:05:01.447 02:52:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.447 02:52:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.447 02:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.447 02:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.447 02:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:01.447 ************************************ 00:05:01.447 START TEST json_config_extra_key 00:05:01.447 ************************************ 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.447 02:52:17 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.447 02:52:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.447 --rc geninfo_all_blocks=1 00:05:01.447 --rc geninfo_unexecuted_blocks=1 00:05:01.447 00:05:01.447 ' 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.447 --rc geninfo_all_blocks=1 00:05:01.447 --rc geninfo_unexecuted_blocks=1 00:05:01.447 00:05:01.447 ' 00:05:01.447 02:52:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.447 --rc genhtml_branch_coverage=1 00:05:01.447 --rc genhtml_function_coverage=1 00:05:01.447 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.448 --rc genhtml_branch_coverage=1 00:05:01.448 --rc 
genhtml_function_coverage=1 00:05:01.448 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e9fb4c6f-1640-4e65-b787-fce1b76805ec 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.448 02:52:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.448 02:52:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.448 02:52:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.448 02:52:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.448 02:52:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.448 02:52:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.448 02:52:17 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.448 02:52:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:01.448 02:52:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.448 02:52:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.448 INFO: launching applications... 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
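The "[: : integer expression expected" message that appears twice above is a real bash diagnostic from nvmf/common.sh line 33: the trace shows '[' '' -eq 1 ']', a test flag that expands to the empty string being compared numerically. test's -eq needs integers on both sides, so it prints the error and returns nonzero, and the script simply falls through to the next branch. A minimal sketch of the failure and a defensive rewrite (the specific flag variable is not visible in the trace, so "flag" here is a stand-in):

    #!/usr/bin/env bash
    # Reproduces the diagnostic seen in the trace: an empty operand is not an integer.
    flag=""
    if [ "$flag" -eq 1 ]; then      # prints "[: : integer expression expected"
        echo "flag set"
    fi
    # Defensive form: default the expansion to 0 so the comparison is always numeric.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag set"
    fi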
00:05:01.448 02:52:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.448 Waiting for target to run... 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69644 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69644 /var/tmp/spdk_tgt.sock 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69644 ']' 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.448 02:52:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.448 02:52:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.709 [2024-11-29 02:52:17.498581] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:01.709 [2024-11-29 02:52:17.498698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69644 ] 00:05:01.970 [2024-11-29 02:52:17.801531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.970 [2024-11-29 02:52:17.813789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.569 02:52:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.569 00:05:02.569 02:52:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:02.570 INFO: shutting down applications... 00:05:02.570 02:52:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
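waitforlisten above blocks until the freshly launched spdk_tgt (pid 69644) answers on its RPC socket, /var/tmp/spdk_tgt.sock. The real helper lives in autotest_common.sh and does more bookkeeping; a simplified sketch of the same polling idea, assuming $ROOT points at an SPDK checkout and reusing the rpc.py -s/-t options that appear later in this log:

    # Sketch only: poll an SPDK RPC socket until the target answers, or give up.
    waitfor() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            "$ROOT/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &>/dev/null \
                && return 0                           # socket is up and answering
            sleep 0.5
        done
        return 1                                      # never came up
    }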
00:05:02.570 02:52:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69644 ]] 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69644 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69644 00:05:02.570 02:52:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.143 SPDK target shutdown done 00:05:03.143 Success 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69644 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.143 02:52:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.144 02:52:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.144 00:05:03.144 real 0m1.566s 00:05:03.144 user 0m1.409s 00:05:03.144 sys 0m0.343s 00:05:03.144 ************************************ 00:05:03.144 END TEST json_config_extra_key 00:05:03.144 ************************************ 00:05:03.144 02:52:18 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.144 02:52:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 02:52:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.144 02:52:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.144 02:52:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.144 02:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 ************************************ 00:05:03.144 START TEST alias_rpc 00:05:03.144 ************************************ 00:05:03.144 02:52:18 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.144 * Looking for test storage... 
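The teardown just traced is the standard json_config/common.sh shutdown loop: send SIGINT to the target, then probe it with kill -0 every 0.5 s for at most 30 iterations and break as soon as the probe fails. Condensed as a sketch (the hard-kill fallback at the end is an assumption; the trace above only shows the loop and the break):

    # Sketch of the shutdown loop: SIGINT, then wait up to ~15 s for exit.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null                    # assumed escalation path
    }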
00:05:03.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:03.144 02:52:18 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.144 02:52:18 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.144 02:52:18 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
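The lcov version gate that opens every test above ("lt 1.15 2" via cmp_versions) splits both version strings on '.', '-' and ':' and compares the components numerically, left to right, returning as soon as one side wins. A condensed sketch of that logic, assuming purely numeric components as in this trace (the real scripts/common.sh routes each component through its decimal helper first):

    # Sketch of the element-wise version compare: "is ver1 < ver2?"
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_* option spelling"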
00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.144 02:52:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 02:52:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.144 02:52:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69718 00:05:03.144 02:52:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69718 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69718 ']' 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.144 02:52:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.144 02:52:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.406 [2024-11-29 02:52:19.158520] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:03.406 [2024-11-29 02:52:19.159289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69718 ] 00:05:03.406 [2024-11-29 02:52:19.306668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.406 [2024-11-29 02:52:19.335991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.351 02:52:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:04.351 02:52:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69718 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69718 ']' 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69718 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69718 00:05:04.351 killing process with pid 69718 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69718' 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 69718 00:05:04.351 02:52:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 69718 00:05:04.613 ************************************ 00:05:04.613 END TEST alias_rpc 00:05:04.613 ************************************ 00:05:04.613 00:05:04.613 real 0m1.665s 00:05:04.613 user 0m1.733s 00:05:04.613 sys 0m0.476s 00:05:04.613 02:52:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.613 02:52:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.875 02:52:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:04.875 02:52:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.875 02:52:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.875 02:52:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.875 02:52:20 -- common/autotest_common.sh@10 -- # set +x 00:05:04.875 ************************************ 00:05:04.875 START TEST spdkcli_tcp 00:05:04.875 ************************************ 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.875 * Looking for test storage... 
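killprocess, traced at the end of alias_rpc above, signals the target only after two identity checks: kill -0 confirms pid 69718 is still alive, and ps --no-headers -o comm= confirms the command name (reactor_0 here) so a recycled or privileged pid is never signalled. Roughly, as a sketch:

    # Sketch of the killprocess guard: verify identity before signalling, then reap.
    killproc() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do
        name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
        [ "$name" = sudo ] && return 1           # refuse to kill a sudo wrapper
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reaps only if it is our child
    }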
00:05:04.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.875 02:52:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.875 --rc genhtml_branch_coverage=1 00:05:04.875 --rc genhtml_function_coverage=1 00:05:04.875 --rc genhtml_legend=1 00:05:04.875 --rc geninfo_all_blocks=1 00:05:04.875 --rc geninfo_unexecuted_blocks=1 00:05:04.875 00:05:04.875 ' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.875 --rc genhtml_branch_coverage=1 00:05:04.875 --rc genhtml_function_coverage=1 00:05:04.875 --rc genhtml_legend=1 00:05:04.875 --rc geninfo_all_blocks=1 00:05:04.875 --rc geninfo_unexecuted_blocks=1 00:05:04.875 
00:05:04.875 ' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.875 --rc genhtml_branch_coverage=1 00:05:04.875 --rc genhtml_function_coverage=1 00:05:04.875 --rc genhtml_legend=1 00:05:04.875 --rc geninfo_all_blocks=1 00:05:04.875 --rc geninfo_unexecuted_blocks=1 00:05:04.875 00:05:04.875 ' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.875 --rc genhtml_branch_coverage=1 00:05:04.875 --rc genhtml_function_coverage=1 00:05:04.875 --rc genhtml_legend=1 00:05:04.875 --rc geninfo_all_blocks=1 00:05:04.875 --rc geninfo_unexecuted_blocks=1 00:05:04.875 00:05:04.875 ' 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69797 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69797 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69797 ']' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.875 02:52:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.875 02:52:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.136 [2024-11-29 02:52:20.905903] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:05.136 [2024-11-29 02:52:20.906112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69797 ] 00:05:05.136 [2024-11-29 02:52:21.050221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.136 [2024-11-29 02:52:21.082314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.136 [2024-11-29 02:52:21.082374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.077 02:52:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.077 02:52:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:06.077 02:52:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69814 00:05:06.077 02:52:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.077 02:52:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:06.077 [ 00:05:06.077 "bdev_malloc_delete", 00:05:06.077 "bdev_malloc_create", 00:05:06.077 "bdev_null_resize", 00:05:06.077 "bdev_null_delete", 00:05:06.077 "bdev_null_create", 00:05:06.077 "bdev_nvme_cuse_unregister", 00:05:06.077 "bdev_nvme_cuse_register", 00:05:06.077 "bdev_opal_new_user", 00:05:06.077 "bdev_opal_set_lock_state", 00:05:06.077 "bdev_opal_delete", 00:05:06.077 "bdev_opal_get_info", 00:05:06.077 "bdev_opal_create", 00:05:06.077 "bdev_nvme_opal_revert", 00:05:06.077 "bdev_nvme_opal_init", 00:05:06.077 "bdev_nvme_send_cmd", 00:05:06.077 "bdev_nvme_set_keys", 00:05:06.077 "bdev_nvme_get_path_iostat", 00:05:06.077 "bdev_nvme_get_mdns_discovery_info", 00:05:06.077 "bdev_nvme_stop_mdns_discovery", 00:05:06.077 "bdev_nvme_start_mdns_discovery", 00:05:06.077 "bdev_nvme_set_multipath_policy", 00:05:06.077 "bdev_nvme_set_preferred_path", 00:05:06.077 "bdev_nvme_get_io_paths", 00:05:06.077 "bdev_nvme_remove_error_injection", 00:05:06.077 "bdev_nvme_add_error_injection", 00:05:06.077 "bdev_nvme_get_discovery_info", 00:05:06.077 "bdev_nvme_stop_discovery", 00:05:06.077 "bdev_nvme_start_discovery", 00:05:06.077 "bdev_nvme_get_controller_health_info", 00:05:06.077 "bdev_nvme_disable_controller", 00:05:06.077 "bdev_nvme_enable_controller", 00:05:06.077 "bdev_nvme_reset_controller", 00:05:06.077 "bdev_nvme_get_transport_statistics", 00:05:06.077 "bdev_nvme_apply_firmware", 00:05:06.077 "bdev_nvme_detach_controller", 00:05:06.077 "bdev_nvme_get_controllers", 00:05:06.077 "bdev_nvme_attach_controller", 00:05:06.077 "bdev_nvme_set_hotplug", 00:05:06.077 "bdev_nvme_set_options", 00:05:06.077 "bdev_passthru_delete", 00:05:06.077 "bdev_passthru_create", 00:05:06.077 "bdev_lvol_set_parent_bdev", 00:05:06.077 "bdev_lvol_set_parent", 00:05:06.077 "bdev_lvol_check_shallow_copy", 00:05:06.077 "bdev_lvol_start_shallow_copy", 00:05:06.077 "bdev_lvol_grow_lvstore", 00:05:06.077 "bdev_lvol_get_lvols", 00:05:06.077 "bdev_lvol_get_lvstores", 00:05:06.077 "bdev_lvol_delete", 00:05:06.077 "bdev_lvol_set_read_only", 00:05:06.077 "bdev_lvol_resize", 00:05:06.077 "bdev_lvol_decouple_parent", 00:05:06.077 "bdev_lvol_inflate", 00:05:06.077 "bdev_lvol_rename", 00:05:06.077 "bdev_lvol_clone_bdev", 00:05:06.077 "bdev_lvol_clone", 00:05:06.077 "bdev_lvol_snapshot", 00:05:06.077 "bdev_lvol_create", 00:05:06.077 "bdev_lvol_delete_lvstore", 00:05:06.077 "bdev_lvol_rename_lvstore", 00:05:06.077 
"bdev_lvol_create_lvstore", 00:05:06.077 "bdev_raid_set_options", 00:05:06.077 "bdev_raid_remove_base_bdev", 00:05:06.077 "bdev_raid_add_base_bdev", 00:05:06.077 "bdev_raid_delete", 00:05:06.077 "bdev_raid_create", 00:05:06.077 "bdev_raid_get_bdevs", 00:05:06.077 "bdev_error_inject_error", 00:05:06.077 "bdev_error_delete", 00:05:06.077 "bdev_error_create", 00:05:06.077 "bdev_split_delete", 00:05:06.077 "bdev_split_create", 00:05:06.077 "bdev_delay_delete", 00:05:06.077 "bdev_delay_create", 00:05:06.077 "bdev_delay_update_latency", 00:05:06.077 "bdev_zone_block_delete", 00:05:06.077 "bdev_zone_block_create", 00:05:06.077 "blobfs_create", 00:05:06.077 "blobfs_detect", 00:05:06.077 "blobfs_set_cache_size", 00:05:06.077 "bdev_xnvme_delete", 00:05:06.077 "bdev_xnvme_create", 00:05:06.077 "bdev_aio_delete", 00:05:06.077 "bdev_aio_rescan", 00:05:06.077 "bdev_aio_create", 00:05:06.077 "bdev_ftl_set_property", 00:05:06.077 "bdev_ftl_get_properties", 00:05:06.077 "bdev_ftl_get_stats", 00:05:06.077 "bdev_ftl_unmap", 00:05:06.077 "bdev_ftl_unload", 00:05:06.077 "bdev_ftl_delete", 00:05:06.077 "bdev_ftl_load", 00:05:06.077 "bdev_ftl_create", 00:05:06.077 "bdev_virtio_attach_controller", 00:05:06.077 "bdev_virtio_scsi_get_devices", 00:05:06.077 "bdev_virtio_detach_controller", 00:05:06.077 "bdev_virtio_blk_set_hotplug", 00:05:06.077 "bdev_iscsi_delete", 00:05:06.077 "bdev_iscsi_create", 00:05:06.077 "bdev_iscsi_set_options", 00:05:06.077 "accel_error_inject_error", 00:05:06.077 "ioat_scan_accel_module", 00:05:06.077 "dsa_scan_accel_module", 00:05:06.077 "iaa_scan_accel_module", 00:05:06.077 "keyring_file_remove_key", 00:05:06.077 "keyring_file_add_key", 00:05:06.077 "keyring_linux_set_options", 00:05:06.077 "fsdev_aio_delete", 00:05:06.078 "fsdev_aio_create", 00:05:06.078 "iscsi_get_histogram", 00:05:06.078 "iscsi_enable_histogram", 00:05:06.078 "iscsi_set_options", 00:05:06.078 "iscsi_get_auth_groups", 00:05:06.078 "iscsi_auth_group_remove_secret", 00:05:06.078 "iscsi_auth_group_add_secret", 00:05:06.078 "iscsi_delete_auth_group", 00:05:06.078 "iscsi_create_auth_group", 00:05:06.078 "iscsi_set_discovery_auth", 00:05:06.078 "iscsi_get_options", 00:05:06.078 "iscsi_target_node_request_logout", 00:05:06.078 "iscsi_target_node_set_redirect", 00:05:06.078 "iscsi_target_node_set_auth", 00:05:06.078 "iscsi_target_node_add_lun", 00:05:06.078 "iscsi_get_stats", 00:05:06.078 "iscsi_get_connections", 00:05:06.078 "iscsi_portal_group_set_auth", 00:05:06.078 "iscsi_start_portal_group", 00:05:06.078 "iscsi_delete_portal_group", 00:05:06.078 "iscsi_create_portal_group", 00:05:06.078 "iscsi_get_portal_groups", 00:05:06.078 "iscsi_delete_target_node", 00:05:06.078 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.078 "iscsi_target_node_add_pg_ig_maps", 00:05:06.078 "iscsi_create_target_node", 00:05:06.078 "iscsi_get_target_nodes", 00:05:06.078 "iscsi_delete_initiator_group", 00:05:06.078 "iscsi_initiator_group_remove_initiators", 00:05:06.078 "iscsi_initiator_group_add_initiators", 00:05:06.078 "iscsi_create_initiator_group", 00:05:06.078 "iscsi_get_initiator_groups", 00:05:06.078 "nvmf_set_crdt", 00:05:06.078 "nvmf_set_config", 00:05:06.078 "nvmf_set_max_subsystems", 00:05:06.078 "nvmf_stop_mdns_prr", 00:05:06.078 "nvmf_publish_mdns_prr", 00:05:06.078 "nvmf_subsystem_get_listeners", 00:05:06.078 "nvmf_subsystem_get_qpairs", 00:05:06.078 "nvmf_subsystem_get_controllers", 00:05:06.078 "nvmf_get_stats", 00:05:06.078 "nvmf_get_transports", 00:05:06.078 "nvmf_create_transport", 00:05:06.078 "nvmf_get_targets", 00:05:06.078 
"nvmf_delete_target", 00:05:06.078 "nvmf_create_target", 00:05:06.078 "nvmf_subsystem_allow_any_host", 00:05:06.078 "nvmf_subsystem_set_keys", 00:05:06.078 "nvmf_subsystem_remove_host", 00:05:06.078 "nvmf_subsystem_add_host", 00:05:06.078 "nvmf_ns_remove_host", 00:05:06.078 "nvmf_ns_add_host", 00:05:06.078 "nvmf_subsystem_remove_ns", 00:05:06.078 "nvmf_subsystem_set_ns_ana_group", 00:05:06.078 "nvmf_subsystem_add_ns", 00:05:06.078 "nvmf_subsystem_listener_set_ana_state", 00:05:06.078 "nvmf_discovery_get_referrals", 00:05:06.078 "nvmf_discovery_remove_referral", 00:05:06.078 "nvmf_discovery_add_referral", 00:05:06.078 "nvmf_subsystem_remove_listener", 00:05:06.078 "nvmf_subsystem_add_listener", 00:05:06.078 "nvmf_delete_subsystem", 00:05:06.078 "nvmf_create_subsystem", 00:05:06.078 "nvmf_get_subsystems", 00:05:06.078 "env_dpdk_get_mem_stats", 00:05:06.078 "nbd_get_disks", 00:05:06.078 "nbd_stop_disk", 00:05:06.078 "nbd_start_disk", 00:05:06.078 "ublk_recover_disk", 00:05:06.078 "ublk_get_disks", 00:05:06.078 "ublk_stop_disk", 00:05:06.078 "ublk_start_disk", 00:05:06.078 "ublk_destroy_target", 00:05:06.078 "ublk_create_target", 00:05:06.078 "virtio_blk_create_transport", 00:05:06.078 "virtio_blk_get_transports", 00:05:06.078 "vhost_controller_set_coalescing", 00:05:06.078 "vhost_get_controllers", 00:05:06.078 "vhost_delete_controller", 00:05:06.078 "vhost_create_blk_controller", 00:05:06.078 "vhost_scsi_controller_remove_target", 00:05:06.078 "vhost_scsi_controller_add_target", 00:05:06.078 "vhost_start_scsi_controller", 00:05:06.078 "vhost_create_scsi_controller", 00:05:06.078 "thread_set_cpumask", 00:05:06.078 "scheduler_set_options", 00:05:06.078 "framework_get_governor", 00:05:06.078 "framework_get_scheduler", 00:05:06.078 "framework_set_scheduler", 00:05:06.078 "framework_get_reactors", 00:05:06.078 "thread_get_io_channels", 00:05:06.078 "thread_get_pollers", 00:05:06.078 "thread_get_stats", 00:05:06.078 "framework_monitor_context_switch", 00:05:06.078 "spdk_kill_instance", 00:05:06.078 "log_enable_timestamps", 00:05:06.078 "log_get_flags", 00:05:06.078 "log_clear_flag", 00:05:06.078 "log_set_flag", 00:05:06.078 "log_get_level", 00:05:06.078 "log_set_level", 00:05:06.078 "log_get_print_level", 00:05:06.078 "log_set_print_level", 00:05:06.078 "framework_enable_cpumask_locks", 00:05:06.078 "framework_disable_cpumask_locks", 00:05:06.078 "framework_wait_init", 00:05:06.078 "framework_start_init", 00:05:06.078 "scsi_get_devices", 00:05:06.078 "bdev_get_histogram", 00:05:06.078 "bdev_enable_histogram", 00:05:06.078 "bdev_set_qos_limit", 00:05:06.078 "bdev_set_qd_sampling_period", 00:05:06.078 "bdev_get_bdevs", 00:05:06.078 "bdev_reset_iostat", 00:05:06.078 "bdev_get_iostat", 00:05:06.078 "bdev_examine", 00:05:06.078 "bdev_wait_for_examine", 00:05:06.078 "bdev_set_options", 00:05:06.078 "accel_get_stats", 00:05:06.078 "accel_set_options", 00:05:06.078 "accel_set_driver", 00:05:06.078 "accel_crypto_key_destroy", 00:05:06.078 "accel_crypto_keys_get", 00:05:06.078 "accel_crypto_key_create", 00:05:06.078 "accel_assign_opc", 00:05:06.078 "accel_get_module_info", 00:05:06.078 "accel_get_opc_assignments", 00:05:06.078 "vmd_rescan", 00:05:06.078 "vmd_remove_device", 00:05:06.078 "vmd_enable", 00:05:06.078 "sock_get_default_impl", 00:05:06.078 "sock_set_default_impl", 00:05:06.078 "sock_impl_set_options", 00:05:06.078 "sock_impl_get_options", 00:05:06.078 "iobuf_get_stats", 00:05:06.078 "iobuf_set_options", 00:05:06.078 "keyring_get_keys", 00:05:06.078 "framework_get_pci_devices", 00:05:06.078 
"framework_get_config", 00:05:06.078 "framework_get_subsystems", 00:05:06.078 "fsdev_set_opts", 00:05:06.078 "fsdev_get_opts", 00:05:06.078 "trace_get_info", 00:05:06.078 "trace_get_tpoint_group_mask", 00:05:06.078 "trace_disable_tpoint_group", 00:05:06.078 "trace_enable_tpoint_group", 00:05:06.078 "trace_clear_tpoint_mask", 00:05:06.078 "trace_set_tpoint_mask", 00:05:06.078 "notify_get_notifications", 00:05:06.078 "notify_get_types", 00:05:06.078 "spdk_get_version", 00:05:06.078 "rpc_get_methods" 00:05:06.078 ] 00:05:06.078 02:52:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.078 02:52:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.078 02:52:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.078 02:52:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.078 02:52:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69797 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69797 ']' 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69797 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69797 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.078 killing process with pid 69797 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69797' 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69797 00:05:06.078 02:52:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69797 00:05:06.650 ************************************ 00:05:06.650 END TEST spdkcli_tcp 00:05:06.650 ************************************ 00:05:06.650 00:05:06.650 real 0m1.731s 00:05:06.650 user 0m3.009s 00:05:06.650 sys 0m0.493s 00:05:06.650 02:52:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.650 02:52:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.651 02:52:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.651 02:52:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.651 02:52:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.651 02:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:06.651 ************************************ 00:05:06.651 START TEST dpdk_mem_utility 00:05:06.651 ************************************ 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.651 * Looking for test storage... 
00:05:06.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.651 02:52:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.651 --rc genhtml_branch_coverage=1 00:05:06.651 --rc genhtml_function_coverage=1 00:05:06.651 --rc genhtml_legend=1 00:05:06.651 --rc geninfo_all_blocks=1 00:05:06.651 --rc geninfo_unexecuted_blocks=1 00:05:06.651 00:05:06.651 ' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.651 --rc 
genhtml_branch_coverage=1 00:05:06.651 --rc genhtml_function_coverage=1 00:05:06.651 --rc genhtml_legend=1 00:05:06.651 --rc geninfo_all_blocks=1 00:05:06.651 --rc geninfo_unexecuted_blocks=1 00:05:06.651 00:05:06.651 ' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.651 --rc genhtml_branch_coverage=1 00:05:06.651 --rc genhtml_function_coverage=1 00:05:06.651 --rc genhtml_legend=1 00:05:06.651 --rc geninfo_all_blocks=1 00:05:06.651 --rc geninfo_unexecuted_blocks=1 00:05:06.651 00:05:06.651 ' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.651 --rc genhtml_branch_coverage=1 00:05:06.651 --rc genhtml_function_coverage=1 00:05:06.651 --rc genhtml_legend=1 00:05:06.651 --rc geninfo_all_blocks=1 00:05:06.651 --rc geninfo_unexecuted_blocks=1 00:05:06.651 00:05:06.651 ' 00:05:06.651 02:52:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.651 02:52:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69897 00:05:06.651 02:52:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69897 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69897 ']' 00:05:06.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.651 02:52:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.651 02:52:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.912 [2024-11-29 02:52:22.683047] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:06.913 [2024-11-29 02:52:22.683204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69897 ] 00:05:06.913 [2024-11-29 02:52:22.832633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.913 [2024-11-29 02:52:22.855882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.857 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.857 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:07.857 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.857 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.857 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.857 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.857 { 00:05:07.857 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.857 } 00:05:07.857 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.857 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.857 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:07.857 1 heaps totaling size 818.000000 MiB 00:05:07.857 size: 818.000000 MiB heap id: 0 00:05:07.857 end heaps---------- 00:05:07.857 9 mempools totaling size 603.782043 MiB 00:05:07.857 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.857 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.857 size: 100.555481 MiB name: bdev_io_69897 00:05:07.857 size: 50.003479 MiB name: msgpool_69897 00:05:07.857 size: 36.509338 MiB name: fsdev_io_69897 00:05:07.857 size: 21.763794 MiB name: PDU_Pool 00:05:07.857 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.857 size: 4.133484 MiB name: evtpool_69897 00:05:07.857 size: 0.026123 MiB name: Session_Pool 00:05:07.857 end mempools------- 00:05:07.857 6 memzones totaling size 4.142822 MiB 00:05:07.857 size: 1.000366 MiB name: RG_ring_0_69897 00:05:07.857 size: 1.000366 MiB name: RG_ring_1_69897 00:05:07.857 size: 1.000366 MiB name: RG_ring_4_69897 00:05:07.857 size: 1.000366 MiB name: RG_ring_5_69897 00:05:07.857 size: 0.125366 MiB name: RG_ring_2_69897 00:05:07.857 size: 0.015991 MiB name: RG_ring_3_69897 00:05:07.857 end memzones------- 00:05:07.857 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.857 heap id: 0 total size: 818.000000 MiB number of busy elements: 338 number of free elements: 15 00:05:07.857 list of free elements. 
size: 10.798645 MiB 00:05:07.857 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:07.857 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:07.857 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:07.857 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:07.857 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:07.857 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:07.857 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:07.857 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:07.857 element at address: 0x20001ae00000 with size: 0.563660 MiB 00:05:07.857 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:07.857 element at address: 0x200000c00000 with size: 0.486450 MiB 00:05:07.857 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:07.857 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:07.857 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:07.857 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:07.857 list of standard malloc elements. size: 199.272461 MiB 00:05:07.857 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:07.857 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:07.857 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.857 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:07.857 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:07.857 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.857 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:07.857 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.857 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:07.857 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:07.857 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:07.857 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:07.857 element at 
address: 0x200000c7d480 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:07.857 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:07.858 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d280 
with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae904c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90580 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90640 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90700 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae907c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90880 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90940 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90a00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90ac0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90b80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90c40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90d00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90dc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90e80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae90f40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91000 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae910c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91180 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 
00:05:07.858 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:07.858 element at 
address: 0x20001ae94240 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:07.858 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:07.859 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:07.859 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d380 
with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f840 with size: 0.000183 MiB 
00:05:07.859 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:07.859 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:07.859 list of memzone associated elements. size: 607.928894 MiB 00:05:07.859 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:07.859 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.859 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:07.859 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.859 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:07.859 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69897_0 00:05:07.859 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:07.859 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69897_0 00:05:07.859 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:07.859 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69897_0 00:05:07.859 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:07.859 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.859 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:07.859 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.859 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:07.859 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69897_0 00:05:07.859 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:07.859 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69897 00:05:07.859 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.859 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69897 00:05:07.859 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:07.859 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.859 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:07.859 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.859 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:07.859 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.859 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:07.859 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.859 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:07.859 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69897 00:05:07.859 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:07.859 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69897 00:05:07.859 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:07.859 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69897 00:05:07.859 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:07.859 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69897 00:05:07.859 element at address: 
0x20000087f740 with size: 0.500488 MiB 00:05:07.859 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69897 00:05:07.859 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:07.859 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69897 00:05:07.859 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:07.859 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.859 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:07.859 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.859 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:07.859 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.859 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:07.859 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69897 00:05:07.859 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:07.859 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69897 00:05:07.859 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:07.859 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.859 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:07.859 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.859 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:07.859 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69897 00:05:07.859 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:07.859 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.860 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:07.860 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69897 00:05:07.860 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:07.860 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69897 00:05:07.860 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:07.860 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69897 00:05:07.860 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:07.860 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.860 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.860 02:52:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69897 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69897 ']' 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69897 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69897 00:05:07.860 killing process with pid 69897 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69897' 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69897 00:05:07.860 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69897 00:05:08.121 00:05:08.121 real 
0m1.536s 00:05:08.121 user 0m1.553s 00:05:08.121 sys 0m0.408s 00:05:08.121 ************************************ 00:05:08.121 END TEST dpdk_mem_utility 00:05:08.121 ************************************ 00:05:08.121 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.121 02:52:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.121 02:52:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.121 02:52:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.121 02:52:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.121 02:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.121 ************************************ 00:05:08.121 START TEST event 00:05:08.121 ************************************ 00:05:08.121 02:52:24 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.121 * Looking for test storage... 00:05:08.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.121 02:52:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.121 02:52:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.121 02:52:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.385 02:52:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.385 02:52:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.385 02:52:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.385 02:52:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.385 02:52:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.386 02:52:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.386 02:52:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.386 02:52:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.386 02:52:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.386 02:52:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.386 02:52:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.386 02:52:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.386 02:52:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.386 02:52:24 event -- scripts/common.sh@345 -- # : 1 00:05:08.386 02:52:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.386 02:52:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.386 02:52:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.386 02:52:24 event -- scripts/common.sh@353 -- # local d=1 00:05:08.386 02:52:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.386 02:52:24 event -- scripts/common.sh@355 -- # echo 1 00:05:08.386 02:52:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.386 02:52:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.386 02:52:24 event -- scripts/common.sh@353 -- # local d=2 00:05:08.386 02:52:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.386 02:52:24 event -- scripts/common.sh@355 -- # echo 2 00:05:08.386 02:52:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.386 02:52:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.386 02:52:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.386 02:52:24 event -- scripts/common.sh@368 -- # return 0 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.386 --rc genhtml_branch_coverage=1 00:05:08.386 --rc genhtml_function_coverage=1 00:05:08.386 --rc genhtml_legend=1 00:05:08.386 --rc geninfo_all_blocks=1 00:05:08.386 --rc geninfo_unexecuted_blocks=1 00:05:08.386 00:05:08.386 ' 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.386 --rc genhtml_branch_coverage=1 00:05:08.386 --rc genhtml_function_coverage=1 00:05:08.386 --rc genhtml_legend=1 00:05:08.386 --rc geninfo_all_blocks=1 00:05:08.386 --rc geninfo_unexecuted_blocks=1 00:05:08.386 00:05:08.386 ' 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.386 --rc genhtml_branch_coverage=1 00:05:08.386 --rc genhtml_function_coverage=1 00:05:08.386 --rc genhtml_legend=1 00:05:08.386 --rc geninfo_all_blocks=1 00:05:08.386 --rc geninfo_unexecuted_blocks=1 00:05:08.386 00:05:08.386 ' 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.386 --rc genhtml_branch_coverage=1 00:05:08.386 --rc genhtml_function_coverage=1 00:05:08.386 --rc genhtml_legend=1 00:05:08.386 --rc geninfo_all_blocks=1 00:05:08.386 --rc geninfo_unexecuted_blocks=1 00:05:08.386 00:05:08.386 ' 00:05:08.386 02:52:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:08.386 02:52:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.386 02:52:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:08.386 02:52:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.386 02:52:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.386 ************************************ 00:05:08.386 START TEST event_perf 00:05:08.386 ************************************ 00:05:08.386 02:52:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.386 Running I/O for 1 seconds...[2024-11-29 
02:52:24.217960] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:08.386 [2024-11-29 02:52:24.218236] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:05:08.386 [2024-11-29 02:52:24.367075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.648 [2024-11-29 02:52:24.400080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.648 [2024-11-29 02:52:24.400412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.648 [2024-11-29 02:52:24.400679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.648 [2024-11-29 02:52:24.400748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.590 Running I/O for 1 seconds... 00:05:09.590 lcore 0: 135214 00:05:09.590 lcore 1: 135212 00:05:09.590 lcore 2: 135214 00:05:09.590 lcore 3: 135214 00:05:09.590 done. 00:05:09.590 00:05:09.590 ************************************ 00:05:09.590 END TEST event_perf 00:05:09.590 ************************************ 00:05:09.590 real 0m1.272s 00:05:09.590 user 0m4.056s 00:05:09.590 sys 0m0.090s 00:05:09.590 02:52:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.590 02:52:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.590 02:52:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.590 02:52:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.590 02:52:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.591 02:52:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.591 ************************************ 00:05:09.591 START TEST event_reactor 00:05:09.591 ************************************ 00:05:09.591 02:52:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.591 [2024-11-29 02:52:25.562707] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
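For scale, the four per-lcore counters printed by event_perf above sum to 135214 + 135212 + 135214 + 135214 = 540,854 events over the one-second window, i.e. roughly 0.54 M events/s aggregate, or about 135 k events/s per reactor on this VM host.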
00:05:09.591 [2024-11-29 02:52:25.562864] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70012 ] 00:05:09.851 [2024-11-29 02:52:25.709952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.851 [2024-11-29 02:52:25.737990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.784 test_start 00:05:10.784 oneshot 00:05:10.784 tick 100 00:05:10.784 tick 100 00:05:10.784 tick 250 00:05:10.784 tick 100 00:05:10.784 tick 100 00:05:10.784 tick 100 00:05:10.784 tick 250 00:05:10.784 tick 500 00:05:10.784 tick 100 00:05:10.784 tick 100 00:05:10.784 tick 250 00:05:10.784 tick 100 00:05:10.784 tick 100 00:05:10.784 test_end 00:05:10.784 ************************************ 00:05:10.784 END TEST event_reactor 00:05:10.784 ************************************ 00:05:10.784 00:05:10.784 real 0m1.235s 00:05:10.784 user 0m1.071s 00:05:10.784 sys 0m0.056s 00:05:10.784 02:52:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.784 02:52:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 02:52:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.041 02:52:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:11.041 02:52:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.041 02:52:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 ************************************ 00:05:11.041 START TEST event_reactor_perf 00:05:11.042 ************************************ 00:05:11.042 02:52:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.042 [2024-11-29 02:52:26.832497] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
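The -c/-m values in these EAL parameter lines are hexadecimal lcore bitmaps: 0xF = 0b1111 selects lcores 0-3, which is why event_perf logged four 'Reactor started on core N' notices, while 0x1 = 0b0001 pins reactor (and reactor_perf below) to core 0 alone.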
00:05:11.042 [2024-11-29 02:52:26.832712] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70048 ] 00:05:11.042 [2024-11-29 02:52:26.975376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.042 [2024-11-29 02:52:26.992480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.415 test_start 00:05:12.415 test_end 00:05:12.415 Performance: 315462 events per second 00:05:12.415 00:05:12.415 real 0m1.221s 00:05:12.415 user 0m1.063s 00:05:12.415 sys 0m0.051s 00:05:12.415 ************************************ 00:05:12.415 END TEST event_reactor_perf 00:05:12.415 ************************************ 00:05:12.415 02:52:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.415 02:52:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.415 02:52:28 event -- event/event.sh@49 -- # uname -s 00:05:12.415 02:52:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.415 02:52:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.415 02:52:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.415 02:52:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.415 02:52:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.415 ************************************ 00:05:12.415 START TEST event_scheduler 00:05:12.415 ************************************ 00:05:12.415 02:52:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.415 * Looking for test storage... 
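The three micro-benchmarks above are standalone binaries driven by run_test; outside the harness they can be invoked directly with the same arguments the trace shows (a sketch, using the workspace-relative paths from this run):

  test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1-second event loop
  test/event/reactor/reactor -t 1                # single reactor, oneshot + periodic ticks
  test/event/reactor_perf/reactor_perf -t 1      # event round-trip rate (315462 events/s in this run)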
00:05:12.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:12.415 02:52:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.415 02:52:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.415 02:52:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.415 02:52:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.415 02:52:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
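The scripts/common.sh trace that brackets this point (it continues just below) is the harness deciding which lcov option syntax to use, and the comparison it performs is easy to restate. A simplified sketch of that field-by-field version compare, under the assumption that it behaves like the IFS=.-: splitting in the trace suggests (the real cmp_versions also handles '>' and '=='):

  # returns success when version $1 sorts strictly before $2
  lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
      ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0   # e.g. field 0: 1 < 2 decides lt 1.15 2
      ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1   # equal versions are not 'lt'
  }
  lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_branch_coverage=1 option style'

Here lt 1.15 2 succeeds on the first field (1 < 2), which is exactly why the trace ends in return 0 and exports the --rc lcov_*_coverage=1 option set into LCOV_OPTS and LCOV.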
00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.416 02:52:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.416 --rc genhtml_branch_coverage=1 00:05:12.416 --rc genhtml_function_coverage=1 00:05:12.416 --rc genhtml_legend=1 00:05:12.416 --rc geninfo_all_blocks=1 00:05:12.416 --rc geninfo_unexecuted_blocks=1 00:05:12.416 00:05:12.416 ' 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.416 --rc genhtml_branch_coverage=1 00:05:12.416 --rc genhtml_function_coverage=1 00:05:12.416 --rc genhtml_legend=1 00:05:12.416 --rc geninfo_all_blocks=1 00:05:12.416 --rc geninfo_unexecuted_blocks=1 00:05:12.416 00:05:12.416 ' 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.416 --rc genhtml_branch_coverage=1 00:05:12.416 --rc genhtml_function_coverage=1 00:05:12.416 --rc genhtml_legend=1 00:05:12.416 --rc geninfo_all_blocks=1 00:05:12.416 --rc geninfo_unexecuted_blocks=1 00:05:12.416 00:05:12.416 ' 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.416 --rc genhtml_branch_coverage=1 00:05:12.416 --rc genhtml_function_coverage=1 00:05:12.416 --rc genhtml_legend=1 00:05:12.416 --rc geninfo_all_blocks=1 00:05:12.416 --rc geninfo_unexecuted_blocks=1 00:05:12.416 00:05:12.416 ' 00:05:12.416 02:52:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.416 02:52:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70119 00:05:12.416 02:52:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.416 02:52:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70119 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70119 ']' 00:05:12.416 02:52:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.416 02:52:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.416 [2024-11-29 02:52:28.282256] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
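The launch line traced above is worth annotating, since every flag shows up again later in the log (meanings inferred from the EAL parameter line that follows and from the reactor notices; treat the -f reading as an assumption):

  # test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
  #   -m 0xF           reactor core mask: lcores 0-3
  #   -p 0x2           main lcore 2 (EAL gets --main-lcore=2; the kill trace later shows reactor_2)
  #   --wait-for-rpc   hold subsystem init until the framework_start_init RPC below
  #   -f               as passed by scheduler.sh in this run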
00:05:12.416 [2024-11-29 02:52:28.282548] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70119 ] 00:05:12.675 [2024-11-29 02:52:28.427823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.675 [2024-11-29 02:52:28.449691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.675 [2024-11-29 02:52:28.449993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.675 [2024-11-29 02:52:28.450118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.675 [2024-11-29 02:52:28.450037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:13.242 02:52:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.242 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.242 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.242 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.242 POWER: Cannot set governor of lcore 0 to performance 00:05:13.242 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.242 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.242 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:13.242 POWER: Unable to set Power Management Environment for lcore 0 00:05:13.242 [2024-11-29 02:52:29.140111] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:13.242 [2024-11-29 02:52:29.140143] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:13.242 [2024-11-29 02:52:29.140212] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:13.242 [2024-11-29 02:52:29.140244] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:13.242 [2024-11-29 02:52:29.140263] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:13.242 [2024-11-29 02:52:29.140283] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.242 02:52:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.242 [2024-11-29 02:52:29.194340] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
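The POWER/GUEST_CHANNEL errors above are the dynamic scheduler probing for a cpufreq or virtio power interface that this VM does not expose; it logs 'Unable to initialize dpdk governor' and carries on with its load/core/busy limits. The RPC sequence traced above and below looks like this when written as plain rpc.py calls (a sketch; rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py on /var/tmp/spdk.sock):

  scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler before init
  scripts/rpc.py framework_start_init              # releases --wait-for-rpc
  scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned to lcore 0, 100% active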
00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.242 02:52:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.242 ************************************ 00:05:13.242 START TEST scheduler_create_thread 00:05:13.242 ************************************ 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.242 2 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.242 3 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.242 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 4 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 5 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 6 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 7 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 8 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 9 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 10 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.501 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.071 ************************************ 00:05:14.071 END TEST scheduler_create_thread 00:05:14.071 ************************************ 00:05:14.071 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.071 00:05:14.071 real 0m0.592s 00:05:14.071 user 0m0.016s 00:05:14.071 sys 0m0.002s 00:05:14.071 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.071 02:52:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.071 02:52:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.071 02:52:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70119 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70119 ']' 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70119 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70119 00:05:14.071 killing process with pid 70119 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70119' 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70119 00:05:14.071 02:52:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70119 00:05:14.384 [2024-11-29 02:52:30.276357] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
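Condensed, the scheduler_create_thread test above drives the test plugin's RPCs roughly as follows (a sketch; scheduler_plugin is the helper shipped with the test under test/event/scheduler, assumed here to be importable by rpc.py --plugin):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
  for mask in 0x1 0x2 0x4 0x8; do           # a 100%-busy thread pinned to each core
    $rpc scheduler_thread_create -n active_pinned -m $mask -a 100
  done
  for mask in 0x1 0x2 0x4 0x8; do           # an idle (0% active) thread pinned to each core
    $rpc scheduler_thread_create -n idle_pinned -m $mask -a 0
  done
  $rpc scheduler_thread_create -n one_third_active -a 30    # unpinned, 30% active
  tid=$($rpc scheduler_thread_create -n half_active -a 0)   # the RPC returns the new thread id
  $rpc scheduler_thread_set_active "$tid" 50                # raise it to 50% active
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"                       # and delete it again

The point of the mix is to hand the dynamic scheduler both pinned load it must respect and unpinned threads it is free to rebalance across the four reactors.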
00:05:14.643 00:05:14.643 real 0m2.338s 00:05:14.643 user 0m4.642s 00:05:14.643 sys 0m0.314s 00:05:14.643 ************************************ 00:05:14.643 END TEST event_scheduler 00:05:14.643 ************************************ 00:05:14.643 02:52:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.643 02:52:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.643 02:52:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.643 02:52:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.643 02:52:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.643 02:52:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.643 02:52:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.643 ************************************ 00:05:14.643 START TEST app_repeat 00:05:14.643 ************************************ 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:14.643 Process app_repeat pid: 70192 00:05:14.643 spdk_app_start Round 0 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70192 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70192' 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70192 /var/tmp/spdk-nbd.sock 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70192 ']' 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.643 02:52:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.643 02:52:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.643 [2024-11-29 02:52:30.502460] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
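The app_repeat harness started above follows the standard SPDK test pattern: launch the app in the background with a private RPC socket, install a cleanup trap, then block until the socket answers. A simplified sketch (the real waitforlisten also verifies the pid and retries with a bound; the poll loop below is a stand-in):

  sock=/var/tmp/spdk-nbd.sock
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
    -r "$sock" -m 0x3 -t 4 &                # two cores (mask 0x3), 4 s per round
  repeat_pid=$!
  trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                               # wait for the UNIX socket to accept RPCs
  done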
00:05:14.643 [2024-11-29 02:52:30.502935] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70192 ] 00:05:14.902 [2024-11-29 02:52:30.649005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.902 [2024-11-29 02:52:30.670888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.902 [2024-11-29 02:52:30.670893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.469 02:52:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.469 02:52:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.469 02:52:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.728 Malloc0 00:05:15.728 02:52:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.987 Malloc1 00:05:15.987 02:52:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.988 02:52:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.246 /dev/nbd0 00:05:16.247 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.247 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:16.247 02:52:32 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.247 1+0 records in 00:05:16.247 1+0 records out 00:05:16.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378849 s, 10.8 MB/s 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.247 02:52:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.247 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.247 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.247 02:52:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.506 /dev/nbd1 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.506 1+0 records in 00:05:16.506 1+0 records out 00:05:16.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221597 s, 18.5 MB/s 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.506 02:52:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
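Each round exports two RAM-backed bdevs over the kernel NBD driver and waits for the devices to become usable: the waitfornbd helper traced above polls /proc/partitions and then requires one successful O_DIRECT read before declaring the device ready. Condensed (probe-file path follows the trace; the retry bound of the real helper is omitted):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096             # 64 MB malloc bdev, 4 KiB blocks -> "Malloc0"
  $rpc bdev_malloc_create 64 4096             # -> "Malloc1"
  $rpc nbd_start_disk Malloc0 /dev/nbd0       # attach each bdev to an NBD device node
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  for nbd in nbd0 nbd1; do
    until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done   # kernel sees the device
    dd if=/dev/$nbd of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
       bs=4096 count=1 iflag=direct           # one direct read must succeed
  done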
00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.506 { 00:05:16.506 "nbd_device": "/dev/nbd0", 00:05:16.506 "bdev_name": "Malloc0" 00:05:16.506 }, 00:05:16.506 { 00:05:16.506 "nbd_device": "/dev/nbd1", 00:05:16.506 "bdev_name": "Malloc1" 00:05:16.506 } 00:05:16.506 ]' 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.506 02:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.506 { 00:05:16.506 "nbd_device": "/dev/nbd0", 00:05:16.506 "bdev_name": "Malloc0" 00:05:16.506 }, 00:05:16.506 { 00:05:16.506 "nbd_device": "/dev/nbd1", 00:05:16.506 "bdev_name": "Malloc1" 00:05:16.506 } 00:05:16.506 ]' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.780 /dev/nbd1' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.780 /dev/nbd1' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.780 256+0 records in 00:05:16.780 256+0 records out 00:05:16.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467263 s, 224 MB/s 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.780 256+0 records in 00:05:16.780 256+0 records out 00:05:16.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207085 s, 50.6 MB/s 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.780 256+0 records in 00:05:16.780 256+0 records out 00:05:16.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189382 s, 55.4 MB/s 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.780 02:52:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.780 02:52:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.781 02:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.781 02:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.781 02:52:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.781 02:52:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.781 02:52:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.039 02:52:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.039 02:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.039 02:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.039 02:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.039 02:52:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.039 02:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.297 02:52:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.297 02:52:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.556 02:52:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.556 [2024-11-29 02:52:33.535093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.813 [2024-11-29 02:52:33.552391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.813 [2024-11-29 02:52:33.552479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.813 [2024-11-29 02:52:33.583542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.813 [2024-11-29 02:52:33.583769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.097 spdk_app_start Round 1 00:05:21.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.098 02:52:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.098 02:52:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.098 02:52:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70192 /var/tmp/spdk-nbd.sock 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70192 ']' 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
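The body of every round (the one above and the two repeats below) is the same data-integrity pass: push 1 MiB of random data through both NBD devices with O_DIRECT, read it back and compare byte-for-byte, then detach. Condensed from the trace:

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB test pattern
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write, bypassing the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte readback check
  done
  rm "$tmp"
  for nbd in /dev/nbd0 /dev/nbd1; do
    $rpc nbd_stop_disk "$nbd"    # detach; waitfornbd_exit then polls /proc/partitions until gone
  done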
00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.098 02:52:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.098 02:52:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.098 Malloc0 00:05:21.098 02:52:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.098 Malloc1 00:05:21.098 02:52:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.098 02:52:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.356 /dev/nbd0 00:05:21.356 02:52:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.356 02:52:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.356 1+0 records in 00:05:21.356 1+0 records out 
00:05:21.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307452 s, 13.3 MB/s 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.356 02:52:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.356 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.356 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.356 02:52:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.615 /dev/nbd1 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.615 1+0 records in 00:05:21.615 1+0 records out 00:05:21.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245015 s, 16.7 MB/s 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.615 02:52:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.615 02:52:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.873 02:52:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.873 { 00:05:21.873 "nbd_device": "/dev/nbd0", 00:05:21.873 "bdev_name": "Malloc0" 00:05:21.873 }, 00:05:21.873 { 00:05:21.873 "nbd_device": "/dev/nbd1", 00:05:21.873 "bdev_name": "Malloc1" 00:05:21.873 } 
00:05:21.873 ]' 00:05:21.873 02:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.873 { 00:05:21.873 "nbd_device": "/dev/nbd0", 00:05:21.873 "bdev_name": "Malloc0" 00:05:21.873 }, 00:05:21.873 { 00:05:21.874 "nbd_device": "/dev/nbd1", 00:05:21.874 "bdev_name": "Malloc1" 00:05:21.874 } 00:05:21.874 ]' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.874 /dev/nbd1' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.874 /dev/nbd1' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.874 256+0 records in 00:05:21.874 256+0 records out 00:05:21.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100119 s, 105 MB/s 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.874 256+0 records in 00:05:21.874 256+0 records out 00:05:21.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150219 s, 69.8 MB/s 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.874 256+0 records in 00:05:21.874 256+0 records out 00:05:21.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170317 s, 61.6 MB/s 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.874 02:52:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.132 02:52:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.132 02:52:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.394 02:52:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.652 02:52:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.652 02:52:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.911 02:52:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.911 [2024-11-29 02:52:38.811604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.911 [2024-11-29 02:52:38.827392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.911 [2024-11-29 02:52:38.827403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.911 [2024-11-29 02:52:38.856185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.911 [2024-11-29 02:52:38.856226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.196 spdk_app_start Round 2 00:05:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.196 02:52:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.196 02:52:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.196 02:52:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70192 /var/tmp/spdk-nbd.sock 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70192 ']' 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
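The attach/detach accounting in each round is a single RPC plus jq: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, the device paths are pulled out with jq, and grep -c counts them (2 while the disks are attached, 0 after teardown, as in the empty '[]' above). Roughly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  disks_json=$($rpc nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on 0 matches
  [ "$count" -eq 0 ] && echo "all NBD devices detached"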
00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.196 02:52:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.196 02:52:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.196 Malloc0 00:05:26.196 02:52:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.455 Malloc1 00:05:26.455 02:52:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.455 02:52:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.714 /dev/nbd0 00:05:26.714 02:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.714 02:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.714 1+0 records in 00:05:26.714 1+0 records out 
00:05:26.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363269 s, 11.3 MB/s 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.714 02:52:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.714 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.714 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.714 02:52:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.972 /dev/nbd1 00:05:26.972 02:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.972 02:52:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.972 1+0 records in 00:05:26.972 1+0 records out 00:05:26.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274456 s, 14.9 MB/s 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.972 02:52:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.973 02:52:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.973 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.973 02:52:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.973 02:52:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.973 02:52:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.973 02:52:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.232 { 00:05:27.232 "nbd_device": "/dev/nbd0", 00:05:27.232 "bdev_name": "Malloc0" 00:05:27.232 }, 00:05:27.232 { 00:05:27.232 "nbd_device": "/dev/nbd1", 00:05:27.232 "bdev_name": "Malloc1" 00:05:27.232 } 
00:05:27.232 ]' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.232 { 00:05:27.232 "nbd_device": "/dev/nbd0", 00:05:27.232 "bdev_name": "Malloc0" 00:05:27.232 }, 00:05:27.232 { 00:05:27.232 "nbd_device": "/dev/nbd1", 00:05:27.232 "bdev_name": "Malloc1" 00:05:27.232 } 00:05:27.232 ]' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.232 /dev/nbd1' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.232 /dev/nbd1' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.232 256+0 records in 00:05:27.232 256+0 records out 00:05:27.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00927375 s, 113 MB/s 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.232 256+0 records in 00:05:27.232 256+0 records out 00:05:27.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153149 s, 68.5 MB/s 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.232 256+0 records in 00:05:27.232 256+0 records out 00:05:27.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215131 s, 48.7 MB/s 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.232 02:52:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.232 02:52:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.492 02:52:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.751 02:52:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.008 02:52:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.008 02:52:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.265 02:52:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.265 [2024-11-29 02:52:44.105442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.265 [2024-11-29 02:52:44.121031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.265 [2024-11-29 02:52:44.121109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.265 [2024-11-29 02:52:44.150913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.265 [2024-11-29 02:52:44.150960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.546 02:52:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70192 /var/tmp/spdk-nbd.sock 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70192 ']' 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
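The app_repeat data check traced above is a plain write-then-verify round trip: 1 MiB of random data is staged in a temp file, dd'd onto each NBD device with oflag=direct, and compared back with cmp before the file is removed. A minimal sketch of that flow (the real nbd_dd_data_verify in bdev/nbd_common.sh takes a separate write/verify operation argument; this merges the two passes, and the temp path is caller-supplied here for illustration):

  nbd_dd_data_verify_sketch() {
      local tmp_file=$1; shift
      local nbd_list=("$@")                  # e.g. /dev/nbd0 /dev/nbd1
      # write pass: 1 MiB (256 x 4 KiB blocks) of random data onto every device
      dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
      local dev
      for dev in "${nbd_list[@]}"; do
          dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
      done
      # verify pass: byte-wise compare of the first 1M against each device
      for dev in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp_file" "$dev"
      done
      rm "$tmp_file"
  }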
00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.546 02:52:47 event.app_repeat -- event/event.sh@39 -- # killprocess 70192 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70192 ']' 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70192 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70192 00:05:31.546 killing process with pid 70192 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70192' 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70192 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70192 00:05:31.546 spdk_app_start is called in Round 0. 00:05:31.546 Shutdown signal received, stop current app iteration 00:05:31.546 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:31.546 spdk_app_start is called in Round 1. 00:05:31.546 Shutdown signal received, stop current app iteration 00:05:31.546 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:31.546 spdk_app_start is called in Round 2. 00:05:31.546 Shutdown signal received, stop current app iteration 00:05:31.546 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:31.546 spdk_app_start is called in Round 3. 00:05:31.546 Shutdown signal received, stop current app iteration 00:05:31.546 ************************************ 00:05:31.546 END TEST app_repeat 00:05:31.546 ************************************ 00:05:31.546 02:52:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.546 02:52:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.546 00:05:31.546 real 0m16.906s 00:05:31.546 user 0m38.004s 00:05:31.546 sys 0m1.927s 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.546 02:52:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.546 02:52:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.546 02:52:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.546 02:52:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.546 02:52:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.546 02:52:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.546 ************************************ 00:05:31.546 START TEST cpu_locks 00:05:31.546 ************************************ 00:05:31.546 02:52:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.547 * Looking for test storage... 
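The shutdown path traced above is autotest_common.sh's killprocess: confirm the pid is alive with kill -0, read its command name via ps, refuse to signal a sudo wrapper, then kill and reap. A reduced sketch of the Linux branch exercised in this run (the sudo and non-Linux branches are elided):

  killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                 # process must still exist
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1     # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                        # reap if it is a child of this shell
  }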
00:05:31.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.547 02:52:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.547 02:52:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.547 02:52:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.806 02:52:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.806 02:52:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.807 02:52:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.807 --rc genhtml_branch_coverage=1 00:05:31.807 --rc genhtml_function_coverage=1 00:05:31.807 --rc genhtml_legend=1 00:05:31.807 --rc geninfo_all_blocks=1 00:05:31.807 --rc geninfo_unexecuted_blocks=1 00:05:31.807 00:05:31.807 ' 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.807 --rc genhtml_branch_coverage=1 00:05:31.807 --rc genhtml_function_coverage=1 
00:05:31.807 --rc genhtml_legend=1 00:05:31.807 --rc geninfo_all_blocks=1 00:05:31.807 --rc geninfo_unexecuted_blocks=1 00:05:31.807 00:05:31.807 ' 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.807 --rc genhtml_branch_coverage=1 00:05:31.807 --rc genhtml_function_coverage=1 00:05:31.807 --rc genhtml_legend=1 00:05:31.807 --rc geninfo_all_blocks=1 00:05:31.807 --rc geninfo_unexecuted_blocks=1 00:05:31.807 00:05:31.807 ' 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.807 --rc genhtml_branch_coverage=1 00:05:31.807 --rc genhtml_function_coverage=1 00:05:31.807 --rc genhtml_legend=1 00:05:31.807 --rc geninfo_all_blocks=1 00:05:31.807 --rc geninfo_unexecuted_blocks=1 00:05:31.807 00:05:31.807 ' 00:05:31.807 02:52:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.807 02:52:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.807 02:52:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.807 02:52:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.807 02:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.807 ************************************ 00:05:31.807 START TEST default_locks 00:05:31.807 ************************************ 00:05:31.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70617 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70617 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70617 ']' 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.807 02:52:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.807 [2024-11-29 02:52:47.643822] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:31.807 [2024-11-29 02:52:47.643965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70617 ] 00:05:31.807 [2024-11-29 02:52:47.787584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.068 [2024-11-29 02:52:47.819047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.638 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.638 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:32.638 02:52:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70617 00:05:32.638 02:52:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.638 02:52:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70617 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70617 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70617 ']' 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70617 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70617 00:05:32.896 killing process with pid 70617 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70617' 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70617 00:05:32.896 02:52:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70617 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70617 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70617 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70617 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70617 ']' 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.155 02:52:49 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.155 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70617) - No such process 00:05:33.155 ERROR: process (pid: 70617) is no longer running 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.155 ************************************ 00:05:33.155 END TEST default_locks 00:05:33.155 ************************************ 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.155 00:05:33.155 real 0m1.482s 00:05:33.155 user 0m1.529s 00:05:33.155 sys 0m0.490s 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.155 02:52:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.155 02:52:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.155 02:52:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.155 02:52:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.155 02:52:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.155 ************************************ 00:05:33.155 START TEST default_locks_via_rpc 00:05:33.155 ************************************ 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70659 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70659 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70659 ']' 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
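default_locks above finishes by asserting that the dead target left no per-core lock files behind: no_locks globs /var/tmp/spdk_cpu_lock_* and fails on any match. A sketch, assuming nullglob so an empty match yields an empty array rather than the literal pattern:

  no_locks_sketch() {
      shopt -s nullglob                           # assumption: empty glob -> empty array
      local lock_files=(/var/tmp/spdk_cpu_lock_*)
      shopt -u nullglob
      (( ${#lock_files[@]} == 0 ))                # succeed only when nothing is left over
  }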
00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.155 02:52:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.413 [2024-11-29 02:52:49.162636] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:33.413 [2024-11-29 02:52:49.162889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70659 ] 00:05:33.413 [2024-11-29 02:52:49.302596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.413 [2024-11-29 02:52:49.320878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70659 ']' 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70659 00:05:34.346 killing process with pid 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70659' 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70659 00:05:34.346 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70659 00:05:34.605 ************************************ 00:05:34.605 END TEST default_locks_via_rpc 00:05:34.605 ************************************ 00:05:34.605 00:05:34.605 real 0m1.377s 00:05:34.605 user 0m1.418s 00:05:34.605 sys 0m0.400s 00:05:34.605 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.605 02:52:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.605 02:52:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.605 02:52:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.605 02:52:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.605 02:52:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.605 ************************************ 00:05:34.605 START TEST non_locking_app_on_locked_coremask 00:05:34.605 ************************************ 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:34.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70705 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70705 /var/tmp/spdk.sock 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70705 ']' 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.605 02:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.605 [2024-11-29 02:52:50.579740] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:34.605 [2024-11-29 02:52:50.579994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ] 00:05:34.863 [2024-11-29 02:52:50.729491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.863 [2024-11-29 02:52:50.747590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70720 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70720 /var/tmp/spdk2.sock 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70720 ']' 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.429 02:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.687 [2024-11-29 02:52:51.476172] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:35.687 [2024-11-29 02:52:51.476413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:05:35.687 [2024-11-29 02:52:51.633690] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.687 [2024-11-29 02:52:51.633736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.687 [2024-11-29 02:52:51.672007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.621 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.621 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.621 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70705 00:05:36.621 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70705 00:05:36.621 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70705 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70705 ']' 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70705 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70705 00:05:36.879 killing process with pid 70705 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70705' 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70705 00:05:36.879 02:52:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70705 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70720 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70720 ']' 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70720 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70720 00:05:37.136 killing process with pid 70720 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70720' 00:05:37.136 02:52:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70720 00:05:37.136 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70720 00:05:37.395 ************************************ 00:05:37.395 END TEST non_locking_app_on_locked_coremask 00:05:37.395 ************************************ 00:05:37.395 00:05:37.395 real 0m2.790s 00:05:37.395 user 0m3.074s 00:05:37.395 sys 0m0.734s 00:05:37.395 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.395 02:52:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.395 02:52:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.395 02:52:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.395 02:52:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.395 02:52:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.395 ************************************ 00:05:37.395 START TEST locking_app_on_unlocked_coremask 00:05:37.395 ************************************ 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70774 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70774 /var/tmp/spdk.sock 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70774 ']' 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.395 02:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.653 [2024-11-29 02:52:53.414883] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:37.653 [2024-11-29 02:52:53.415009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70774 ] 00:05:37.653 [2024-11-29 02:52:53.557393] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
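locking_app_on_unlocked_coremask, starting above, pairs two targets on the same core: the first runs with --disable-cpumask-locks and claims nothing, so a second target on its own RPC socket can take the core-0 lock. A condensed sketch of that setup, with spdk_tgt standing in for the full build/bin/spdk_tgt path from the trace and the waitforlisten steps elided:

  spdk_tgt -m 0x1 --disable-cpumask-locks &       # instance 1: core locks deactivated
  pid1=$!
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # instance 2: claims the core-0 lock
  pid2=$!
  # the lock file should be held by the second instance only
  lslocks -p "$pid2" | grep -q spdk_cpu_lock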
00:05:37.653 [2024-11-29 02:52:53.557428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.653 [2024-11-29 02:52:53.573909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.599 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70790 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70790 /var/tmp/spdk2.sock 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70790 ']' 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.600 02:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.600 [2024-11-29 02:52:54.309655] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:38.600 [2024-11-29 02:52:54.309907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70790 ] 00:05:38.600 [2024-11-29 02:52:54.457099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.600 [2024-11-29 02:52:54.489499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.177 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.177 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.177 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70790 00:05:39.177 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70790 00:05:39.177 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70774 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70774 ']' 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70774 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70774 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.743 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.744 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70774' 00:05:39.744 killing process with pid 70774 00:05:39.744 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70774 00:05:39.744 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70774 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70790 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70790 ']' 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70790 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70790 00:05:40.002 killing process with pid 70790 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.002 02:52:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70790' 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70790 00:05:40.002 02:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70790 00:05:40.262 00:05:40.262 real 0m2.775s 00:05:40.262 user 0m3.090s 00:05:40.262 sys 0m0.716s 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 ************************************ 00:05:40.262 END TEST locking_app_on_unlocked_coremask 00:05:40.262 ************************************ 00:05:40.262 02:52:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.262 02:52:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.262 02:52:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.262 02:52:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 ************************************ 00:05:40.262 START TEST locking_app_on_locked_coremask 00:05:40.262 ************************************ 00:05:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70848 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70848 /var/tmp/spdk.sock 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70848 ']' 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.262 02:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 [2024-11-29 02:52:56.223937] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:40.262 [2024-11-29 02:52:56.224350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:05:40.521 [2024-11-29 02:52:56.365467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.521 [2024-11-29 02:52:56.382044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70864 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70864 /var/tmp/spdk2.sock 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70864 /var/tmp/spdk2.sock 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70864 /var/tmp/spdk2.sock 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70864 ']' 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.087 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.346 [2024-11-29 02:52:57.124749] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:41.346 [2024-11-29 02:52:57.124883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70864 ] 00:05:41.346 [2024-11-29 02:52:57.272093] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70848 has claimed it. 00:05:41.346 [2024-11-29 02:52:57.272145] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.912 ERROR: process (pid: 70864) is no longer running 00:05:41.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70864) - No such process 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70848 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70848 00:05:41.912 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70848 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70848 ']' 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70848 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.170 02:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70848 00:05:42.170 killing process with pid 70848 00:05:42.170 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.170 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.170 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70848' 00:05:42.170 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70848 00:05:42.170 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70848 00:05:42.428 ************************************ 00:05:42.428 END TEST locking_app_on_locked_coremask 00:05:42.428 ************************************ 00:05:42.428 00:05:42.428 real 0m2.081s 00:05:42.428 user 0m2.348s 00:05:42.428 sys 0m0.480s 00:05:42.428 02:52:58 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.428 02:52:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 02:52:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.429 02:52:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.429 02:52:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.429 02:52:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 ************************************ 00:05:42.429 START TEST locking_overlapped_coremask 00:05:42.429 ************************************ 00:05:42.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70906 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70906 /var/tmp/spdk.sock 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70906 ']' 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.429 02:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.429 [2024-11-29 02:52:58.333482] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:42.429 [2024-11-29 02:52:58.333581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70906 ] 00:05:42.687 [2024-11-29 02:52:58.470491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.687 [2024-11-29 02:52:58.489341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.687 [2024-11-29 02:52:58.489665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.687 [2024-11-29 02:52:58.489726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70924 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70924 /var/tmp/spdk2.sock 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70924 /var/tmp/spdk2.sock 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70924 /var/tmp/spdk2.sock 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70924 ']' 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.252 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.511 [2024-11-29 02:52:59.250781] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:43.511 [2024-11-29 02:52:59.251078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70924 ] 00:05:43.511 [2024-11-29 02:52:59.410184] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70906 has claimed it. 00:05:43.511 [2024-11-29 02:52:59.410261] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.077 ERROR: process (pid: 70924) is no longer running 00:05:44.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70924) - No such process 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70906 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70906 ']' 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70906 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70906 00:05:44.077 killing process with pid 70906 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70906' 00:05:44.077 02:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70906 00:05:44.077 02:52:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70906 00:05:44.335 00:05:44.335 real 0m1.855s 00:05:44.335 user 0m5.250s 00:05:44.335 sys 0m0.346s 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.335 ************************************ 00:05:44.335 END TEST locking_overlapped_coremask 00:05:44.335 ************************************ 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.335 02:53:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.335 02:53:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.335 02:53:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.335 02:53:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.335 ************************************ 00:05:44.335 START TEST locking_overlapped_coremask_via_rpc 00:05:44.335 ************************************ 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:44.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70966 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70966 /var/tmp/spdk.sock 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70966 ']' 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.335 02:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.335 [2024-11-29 02:53:00.243005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:44.335 [2024-11-29 02:53:00.243123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70966 ] 00:05:44.593 [2024-11-29 02:53:00.386931] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
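Note: from here on the suite is locking_overlapped_coremask_via_rpc, and both targets are started with --disable-cpumask-locks, which is why the startup log prints 'CPU core locks deactivated.' instead of aborting: no /var/tmp/spdk_cpu_lock_* files are created at boot, so the overlapping masks 0x7 and 0x1c can coexist until a target claims its cores through the framework_enable_cpumask_locks RPC exercised below. A hedged sketch of the launch sequence, reusing the flags visible in this trace (backgrounding with & stands in for the suite's waitforlisten handshake):

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # neither process holds core locks yet; the conflict surfaces only via RPC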
00:05:44.593 [2024-11-29 02:53:00.386968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.593 [2024-11-29 02:53:00.405257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.593 [2024-11-29 02:53:00.405538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.593 [2024-11-29 02:53:00.405611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70984 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70984 /var/tmp/spdk2.sock 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70984 ']' 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.158 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.158 [2024-11-29 02:53:01.137127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:45.158 [2024-11-29 02:53:01.137415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:05:45.416 [2024-11-29 02:53:01.298866] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.416 [2024-11-29 02:53:01.298917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.416 [2024-11-29 02:53:01.339253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.416 [2024-11-29 02:53:01.343034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.416 [2024-11-29 02:53:01.343112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.980 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.981 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.981 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.981 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.981 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.238 [2024-11-29 02:53:01.987961] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70966 has claimed it. 00:05:46.238 request: 00:05:46.238 { 00:05:46.238 "method": "framework_enable_cpumask_locks", 00:05:46.238 "req_id": 1 00:05:46.238 } 00:05:46.238 Got JSON-RPC error response 00:05:46.238 response: 00:05:46.238 { 00:05:46.238 "code": -32603, 00:05:46.238 "message": "Failed to claim CPU core: 2" 00:05:46.238 } 00:05:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
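Note: the -32603 response above is the pass condition, not a test failure: the first target (pid 70966) enabled its locks first and claimed cores 0-2, so the same RPC against the second target's socket cannot claim core 2 and is rejected with 'Failed to claim CPU core: 2'. Reproducing the pair of calls by hand, assuming the repo-relative rpc.py path that rpc_cmd wraps in this suite:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target: succeeds, locks cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: JSON-RPC error -32603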
00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70966 /var/tmp/spdk.sock 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70966 ']' 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.238 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.239 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.239 02:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70984 /var/tmp/spdk2.sock 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70984 ']' 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.239 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 ************************************ 00:05:46.496 END TEST locking_overlapped_coremask_via_rpc 00:05:46.496 ************************************ 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.496 00:05:46.496 real 0m2.234s 00:05:46.496 user 0m1.039s 00:05:46.496 sys 0m0.133s 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.496 02:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.496 02:53:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.496 02:53:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70966 ]] 00:05:46.496 02:53:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70966 00:05:46.496 02:53:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70966 ']' 00:05:46.496 02:53:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70966 00:05:46.496 02:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70966 00:05:46.497 killing process with pid 70966 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70966' 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70966 00:05:46.497 02:53:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70966 00:05:46.754 02:53:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70984 ]] 00:05:46.754 02:53:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70984 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70984 ']' 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70984 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.754 
02:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70984 00:05:46.754 killing process with pid 70984 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70984' 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70984 00:05:46.754 02:53:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70984 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.321 Process with pid 70966 is not found 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70966 ]] 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70966 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70966 ']' 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70966 00:05:47.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70966) - No such process 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70966 is not found' 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70984 ]] 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70984 00:05:47.321 Process with pid 70984 is not found 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70984 ']' 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70984 00:05:47.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70984) - No such process 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70984 is not found' 00:05:47.321 02:53:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.321 ************************************ 00:05:47.321 END TEST cpu_locks 00:05:47.321 ************************************ 00:05:47.321 00:05:47.321 real 0m15.608s 00:05:47.321 user 0m27.991s 00:05:47.321 sys 0m4.015s 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.321 02:53:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.321 ************************************ 00:05:47.321 END TEST event 00:05:47.321 ************************************ 00:05:47.321 00:05:47.321 real 0m39.024s 00:05:47.321 user 1m16.992s 00:05:47.321 sys 0m6.693s 00:05:47.321 02:53:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.321 02:53:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.321 02:53:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.321 02:53:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.321 02:53:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.321 02:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:47.321 ************************************ 00:05:47.321 START TEST thread 00:05:47.321 ************************************ 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.321 * Looking for test storage... 
00:05:47.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.321 02:53:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.321 02:53:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.321 02:53:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.321 02:53:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.321 02:53:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.321 02:53:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.321 02:53:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.321 02:53:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.321 02:53:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.321 02:53:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.321 02:53:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.321 02:53:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:47.321 02:53:03 thread -- scripts/common.sh@345 -- # : 1 00:05:47.321 02:53:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.321 02:53:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.321 02:53:03 thread -- scripts/common.sh@365 -- # decimal 1 00:05:47.321 02:53:03 thread -- scripts/common.sh@353 -- # local d=1 00:05:47.321 02:53:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.321 02:53:03 thread -- scripts/common.sh@355 -- # echo 1 00:05:47.321 02:53:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.321 02:53:03 thread -- scripts/common.sh@366 -- # decimal 2 00:05:47.321 02:53:03 thread -- scripts/common.sh@353 -- # local d=2 00:05:47.321 02:53:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.321 02:53:03 thread -- scripts/common.sh@355 -- # echo 2 00:05:47.321 02:53:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.321 02:53:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.321 02:53:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.321 02:53:03 thread -- scripts/common.sh@368 -- # return 0 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.321 --rc genhtml_branch_coverage=1 00:05:47.321 --rc genhtml_function_coverage=1 00:05:47.321 --rc genhtml_legend=1 00:05:47.321 --rc geninfo_all_blocks=1 00:05:47.321 --rc geninfo_unexecuted_blocks=1 00:05:47.321 00:05:47.321 ' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.321 --rc genhtml_branch_coverage=1 00:05:47.321 --rc genhtml_function_coverage=1 00:05:47.321 --rc genhtml_legend=1 00:05:47.321 --rc geninfo_all_blocks=1 00:05:47.321 --rc geninfo_unexecuted_blocks=1 00:05:47.321 00:05:47.321 ' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:47.321 --rc genhtml_branch_coverage=1 00:05:47.321 --rc genhtml_function_coverage=1 00:05:47.321 --rc genhtml_legend=1 00:05:47.321 --rc geninfo_all_blocks=1 00:05:47.321 --rc geninfo_unexecuted_blocks=1 00:05:47.321 00:05:47.321 ' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.321 --rc genhtml_branch_coverage=1 00:05:47.321 --rc genhtml_function_coverage=1 00:05:47.321 --rc genhtml_legend=1 00:05:47.321 --rc geninfo_all_blocks=1 00:05:47.321 --rc geninfo_unexecuted_blocks=1 00:05:47.321 00:05:47.321 ' 00:05:47.321 02:53:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.321 02:53:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.321 ************************************ 00:05:47.321 START TEST thread_poller_perf 00:05:47.321 ************************************ 00:05:47.321 02:53:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.321 [2024-11-29 02:53:03.274212] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:47.321 [2024-11-29 02:53:03.274691] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71111 ] 00:05:47.581 [2024-11-29 02:53:03.421607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.581 [2024-11-29 02:53:03.440810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.581 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:48.515 [2024-11-29T02:53:04.507Z] ====================================== 00:05:48.515 [2024-11-29T02:53:04.507Z] busy:2615383550 (cyc) 00:05:48.515 [2024-11-29T02:53:04.507Z] total_run_count: 306000 00:05:48.515 [2024-11-29T02:53:04.507Z] tsc_hz: 2600000000 (cyc) 00:05:48.515 [2024-11-29T02:53:04.507Z] ====================================== 00:05:48.515 [2024-11-29T02:53:04.507Z] poller_cost: 8547 (cyc), 3287 (nsec) 00:05:48.515 00:05:48.515 real 0m1.241s 00:05:48.515 user 0m1.082s 00:05:48.515 sys 0m0.052s 00:05:48.515 02:53:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.515 02:53:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.515 ************************************ 00:05:48.515 END TEST thread_poller_perf 00:05:48.515 ************************************ 00:05:48.773 02:53:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.773 02:53:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:48.773 02:53:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.773 02:53:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.773 ************************************ 00:05:48.773 START TEST thread_poller_perf 00:05:48.773 ************************************ 00:05:48.773 02:53:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.773 [2024-11-29 02:53:04.553602] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:48.773 [2024-11-29 02:53:04.553817] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71142 ] 00:05:48.773 [2024-11-29 02:53:04.697207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.773 Running 1000 pollers for 1 seconds with 0 microseconds period. 
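Note: poller_perf derives poller_cost by dividing the busy TSC cycles by total_run_count, then converting cycles to nanoseconds through tsc_hz. For the 1-microsecond-period run above: 2615383550 / 306000 ≈ 8547 cycles per invocation, and 8547 / 2.6 cycles-per-nanosecond ≈ 3287 ns, matching the reported 'poller_cost: 8547 (cyc), 3287 (nsec)'. The same check in shell, with every figure taken verbatim from the table:

  awk 'BEGIN {
    busy = 2615383550; runs = 306000; tsc_hz = 2600000000
    cyc = busy / runs                           # cycles per poller invocation
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
  }'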
00:05:48.773 [2024-11-29 02:53:04.715704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.150 [2024-11-29T02:53:06.142Z] ====================================== 00:05:50.150 [2024-11-29T02:53:06.142Z] busy:2603242688 (cyc) 00:05:50.150 [2024-11-29T02:53:06.142Z] total_run_count: 3957000 00:05:50.150 [2024-11-29T02:53:06.142Z] tsc_hz: 2600000000 (cyc) 00:05:50.150 [2024-11-29T02:53:06.142Z] ====================================== 00:05:50.150 [2024-11-29T02:53:06.142Z] poller_cost: 657 (cyc), 252 (nsec) 00:05:50.150 00:05:50.150 real 0m1.231s 00:05:50.150 user 0m1.070s 00:05:50.150 sys 0m0.054s 00:05:50.150 ************************************ 00:05:50.150 END TEST thread_poller_perf 00:05:50.150 ************************************ 00:05:50.150 02:53:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.150 02:53:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.150 02:53:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:50.150 ************************************ 00:05:50.150 END TEST thread 00:05:50.150 ************************************ 00:05:50.150 00:05:50.150 real 0m2.705s 00:05:50.150 user 0m2.262s 00:05:50.150 sys 0m0.231s 00:05:50.150 02:53:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.150 02:53:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.150 02:53:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:50.150 02:53:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.150 02:53:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.150 02:53:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.150 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.150 ************************************ 00:05:50.150 START TEST app_cmdline 00:05:50.150 ************************************ 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.150 * Looking for test storage... 
00:05:50.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.150 02:53:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.150 --rc genhtml_branch_coverage=1 00:05:50.150 --rc genhtml_function_coverage=1 00:05:50.150 --rc genhtml_legend=1 00:05:50.150 --rc geninfo_all_blocks=1 00:05:50.150 --rc geninfo_unexecuted_blocks=1 00:05:50.150 00:05:50.150 ' 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.150 --rc genhtml_branch_coverage=1 00:05:50.150 --rc genhtml_function_coverage=1 00:05:50.150 --rc genhtml_legend=1 00:05:50.150 --rc geninfo_all_blocks=1 00:05:50.150 --rc geninfo_unexecuted_blocks=1 00:05:50.150 
00:05:50.150 ' 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.150 --rc genhtml_branch_coverage=1 00:05:50.150 --rc genhtml_function_coverage=1 00:05:50.150 --rc genhtml_legend=1 00:05:50.150 --rc geninfo_all_blocks=1 00:05:50.150 --rc geninfo_unexecuted_blocks=1 00:05:50.150 00:05:50.150 ' 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.150 --rc genhtml_branch_coverage=1 00:05:50.150 --rc genhtml_function_coverage=1 00:05:50.150 --rc genhtml_legend=1 00:05:50.150 --rc geninfo_all_blocks=1 00:05:50.150 --rc geninfo_unexecuted_blocks=1 00:05:50.150 00:05:50.150 ' 00:05:50.150 02:53:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.150 02:53:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71231 00:05:50.150 02:53:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71231 00:05:50.150 02:53:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71231 ']' 00:05:50.150 02:53:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.151 02:53:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.151 02:53:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.151 02:53:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.151 02:53:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.151 02:53:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.151 [2024-11-29 02:53:06.046788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:50.151 [2024-11-29 02:53:06.047061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:05:50.410 [2024-11-29 02:53:06.189653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.410 [2024-11-29 02:53:06.207926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.977 02:53:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.977 02:53:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:50.977 02:53:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:51.236 { 00:05:51.236 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:05:51.236 "fields": { 00:05:51.236 "major": 25, 00:05:51.236 "minor": 1, 00:05:51.236 "patch": 0, 00:05:51.236 "suffix": "-pre", 00:05:51.236 "commit": "35cd3e84d" 00:05:51.236 } 00:05:51.236 } 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.236 02:53:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:51.236 02:53:07 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.494 request: 00:05:51.494 { 00:05:51.494 "method": "env_dpdk_get_mem_stats", 00:05:51.494 "req_id": 1 00:05:51.494 } 00:05:51.494 Got JSON-RPC error response 00:05:51.494 response: 00:05:51.494 { 00:05:51.494 "code": -32601, 00:05:51.494 "message": "Method not found" 00:05:51.494 } 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.494 02:53:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71231 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71231 ']' 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71231 00:05:51.494 02:53:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71231 00:05:51.495 killing process with pid 71231 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71231' 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 71231 00:05:51.495 02:53:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 71231 00:05:51.755 ************************************ 00:05:51.755 END TEST app_cmdline 00:05:51.755 ************************************ 00:05:51.755 00:05:51.755 real 0m1.727s 00:05:51.755 user 0m2.020s 00:05:51.755 sys 0m0.398s 00:05:51.755 02:53:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.755 02:53:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.755 02:53:07 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.755 02:53:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.755 02:53:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.755 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.755 ************************************ 00:05:51.755 START TEST version 00:05:51.755 ************************************ 00:05:51.755 02:53:07 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:51.755 * Looking for test storage... 
00:05:51.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:51.755 02:53:07 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.755 02:53:07 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.755 02:53:07 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.017 02:53:07 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.017 02:53:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.017 02:53:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.017 02:53:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.017 02:53:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.017 02:53:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.017 02:53:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.017 02:53:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.017 02:53:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.017 02:53:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.017 02:53:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.017 02:53:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.017 02:53:07 version -- scripts/common.sh@344 -- # case "$op" in 00:05:52.017 02:53:07 version -- scripts/common.sh@345 -- # : 1 00:05:52.017 02:53:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.017 02:53:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.017 02:53:07 version -- scripts/common.sh@365 -- # decimal 1 00:05:52.017 02:53:07 version -- scripts/common.sh@353 -- # local d=1 00:05:52.017 02:53:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.017 02:53:07 version -- scripts/common.sh@355 -- # echo 1 00:05:52.017 02:53:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.017 02:53:07 version -- scripts/common.sh@366 -- # decimal 2 00:05:52.017 02:53:07 version -- scripts/common.sh@353 -- # local d=2 00:05:52.017 02:53:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.017 02:53:07 version -- scripts/common.sh@355 -- # echo 2 00:05:52.017 02:53:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.017 02:53:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.017 02:53:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.017 02:53:07 version -- scripts/common.sh@368 -- # return 0 00:05:52.017 02:53:07 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.017 02:53:07 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.017 --rc genhtml_branch_coverage=1 00:05:52.017 --rc genhtml_function_coverage=1 00:05:52.017 --rc genhtml_legend=1 00:05:52.017 --rc geninfo_all_blocks=1 00:05:52.017 --rc geninfo_unexecuted_blocks=1 00:05:52.017 00:05:52.017 ' 00:05:52.017 02:53:07 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.017 --rc genhtml_branch_coverage=1 00:05:52.017 --rc genhtml_function_coverage=1 00:05:52.017 --rc genhtml_legend=1 00:05:52.017 --rc geninfo_all_blocks=1 00:05:52.017 --rc geninfo_unexecuted_blocks=1 00:05:52.017 00:05:52.017 ' 00:05:52.018 02:53:07 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.018 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:52.018 --rc genhtml_branch_coverage=1 00:05:52.018 --rc genhtml_function_coverage=1 00:05:52.018 --rc genhtml_legend=1 00:05:52.018 --rc geninfo_all_blocks=1 00:05:52.018 --rc geninfo_unexecuted_blocks=1 00:05:52.018 00:05:52.018 ' 00:05:52.018 02:53:07 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.018 --rc genhtml_branch_coverage=1 00:05:52.018 --rc genhtml_function_coverage=1 00:05:52.018 --rc genhtml_legend=1 00:05:52.018 --rc geninfo_all_blocks=1 00:05:52.018 --rc geninfo_unexecuted_blocks=1 00:05:52.018 00:05:52.018 ' 00:05:52.018 02:53:07 version -- app/version.sh@17 -- # get_header_version major 00:05:52.018 02:53:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # cut -f2 00:05:52.018 02:53:07 version -- app/version.sh@17 -- # major=25 00:05:52.018 02:53:07 version -- app/version.sh@18 -- # get_header_version minor 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # cut -f2 00:05:52.018 02:53:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.018 02:53:07 version -- app/version.sh@18 -- # minor=1 00:05:52.018 02:53:07 version -- app/version.sh@19 -- # get_header_version patch 00:05:52.018 02:53:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # cut -f2 00:05:52.018 02:53:07 version -- app/version.sh@19 -- # patch=0 00:05:52.018 02:53:07 version -- app/version.sh@20 -- # get_header_version suffix 00:05:52.018 02:53:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # cut -f2 00:05:52.018 02:53:07 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.018 02:53:07 version -- app/version.sh@20 -- # suffix=-pre 00:05:52.018 02:53:07 version -- app/version.sh@22 -- # version=25.1 00:05:52.018 02:53:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:52.018 02:53:07 version -- app/version.sh@28 -- # version=25.1rc0 00:05:52.018 02:53:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:52.018 02:53:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:52.018 02:53:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:52.018 02:53:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:52.018 00:05:52.018 real 0m0.202s 00:05:52.018 user 0m0.117s 00:05:52.018 sys 0m0.110s 00:05:52.018 ************************************ 00:05:52.018 END TEST version 00:05:52.018 ************************************ 00:05:52.018 02:53:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.018 02:53:07 version -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 02:53:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:52.018 02:53:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:52.018 02:53:07 -- spdk/autotest.sh@194 -- # uname -s 00:05:52.018 02:53:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:52.018 02:53:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.018 02:53:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:52.018 02:53:07 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:52.018 02:53:07 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:52.018 02:53:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:52.018 02:53:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.018 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 ************************************ 00:05:52.018 START TEST blockdev_nvme 00:05:52.018 ************************************ 00:05:52.018 02:53:07 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:52.018 * Looking for test storage... 00:05:52.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:52.018 02:53:07 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.018 02:53:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.018 02:53:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.280 02:53:08 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.280 --rc genhtml_branch_coverage=1 00:05:52.280 --rc genhtml_function_coverage=1 00:05:52.280 --rc genhtml_legend=1 00:05:52.280 --rc geninfo_all_blocks=1 00:05:52.280 --rc geninfo_unexecuted_blocks=1 00:05:52.280 00:05:52.280 ' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.280 --rc genhtml_branch_coverage=1 00:05:52.280 --rc genhtml_function_coverage=1 00:05:52.280 --rc genhtml_legend=1 00:05:52.280 --rc geninfo_all_blocks=1 00:05:52.280 --rc geninfo_unexecuted_blocks=1 00:05:52.280 00:05:52.280 ' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.280 --rc genhtml_branch_coverage=1 00:05:52.280 --rc genhtml_function_coverage=1 00:05:52.280 --rc genhtml_legend=1 00:05:52.280 --rc geninfo_all_blocks=1 00:05:52.280 --rc geninfo_unexecuted_blocks=1 00:05:52.280 00:05:52.280 ' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.280 --rc genhtml_branch_coverage=1 00:05:52.280 --rc genhtml_function_coverage=1 00:05:52.280 --rc genhtml_legend=1 00:05:52.280 --rc geninfo_all_blocks=1 00:05:52.280 --rc geninfo_unexecuted_blocks=1 00:05:52.280 00:05:52.280 ' 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:52.280 02:53:08 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71392 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71392 00:05:52.280 02:53:08 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71392 ']' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.280 02:53:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.280 [2024-11-29 02:53:08.117311] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:52.280 [2024-11-29 02:53:08.117706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:05:52.280 [2024-11-29 02:53:08.263667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.542 [2024-11-29 02:53:08.293478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.116 02:53:08 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.116 02:53:08 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:53.116 02:53:08 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:53.116 02:53:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:53.116 02:53:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:53.116 02:53:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:53.116 02:53:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:53.116 02:53:09 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:53.116 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.116 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.378 02:53:09 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:53.378 02:53:09 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.378 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.640 02:53:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "618e442b-466a-45b2-b28c-87b8bdbd6342"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "618e442b-466a-45b2-b28c-87b8bdbd6342",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7132ca7f-c2dc-4e68-9ea2-3bd3f1ebb6fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7132ca7f-c2dc-4e68-9ea2-3bd3f1ebb6fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0454a762-eb02-4e42-9743-e76ae544fb2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0454a762-eb02-4e42-9743-e76ae544fb2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "94b54a25-3e44-4f58-9c5c-6c997aec9a75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94b54a25-3e44-4f58-9c5c-6c997aec9a75",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d0e0b9a3-ccf2-498f-952e-dedb584a8dcc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d0e0b9a3-ccf2-498f-952e-dedb584a8dcc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b08b464e-aea0-423b-a765-f038f41e7ef6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b08b464e-aea0-423b-a765-f038f41e7ef6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:53.640 02:53:09 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71392 00:05:53.640 02:53:09 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71392 ']' 00:05:53.640 02:53:09 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71392 00:05:53.640 02:53:09 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:53.641 02:53:09 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71392 00:05:53.641 killing process with pid 71392 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71392' 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71392 00:05:53.641 02:53:09 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71392 00:05:53.901 02:53:09 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:53.901 02:53:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:53.901 02:53:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:53.901 02:53:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.901 02:53:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.901 ************************************ 00:05:53.901 START TEST bdev_hello_world 00:05:53.901 ************************************ 00:05:53.901 02:53:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:53.901 [2024-11-29 02:53:09.798197] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:53.901 [2024-11-29 02:53:09.798341] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:05:54.162 [2024-11-29 02:53:09.945136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.162 [2024-11-29 02:53:09.975955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.473 [2024-11-29 02:53:10.377133] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:54.473 [2024-11-29 02:53:10.377202] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:54.473 [2024-11-29 02:53:10.377226] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:54.473 [2024-11-29 02:53:10.379670] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:54.473 [2024-11-29 02:53:10.380567] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:54.473 [2024-11-29 02:53:10.380607] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:54.473 [2024-11-29 02:53:10.381268] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
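The killprocess call traced above (pid 71392) follows a fixed probe-then-signal pattern: confirm the PID is still alive with kill -0, resolve the command name with ps so that a sudo wrapper is never signalled directly, then send the default SIGTERM and reap the process with wait. A minimal sketch of that pattern, simplified from the steps visible in the trace (the real autotest_common.sh helper also handles the sudo case and non-Linux ps flags, both elided here):

    killprocess() {
        local pid=$1
        # kill -0 delivers no signal; it only checks that the PID exists
        kill -0 "$pid" || return 1
        # Resolve the command name so a sudo wrapper is not signalled directly
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # wait reaps the process and surfaces its exit status to the caller
        wait "$pid"
    }

Probing with kill -0 before killing keeps failures visible: if the target already exited, the helper reports that instead of silently succeeding.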
00:05:54.473 00:05:54.473 [2024-11-29 02:53:10.381351] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:54.754 ************************************ 00:05:54.754 END TEST bdev_hello_world 00:05:54.754 00:05:54.754 real 0m0.839s 00:05:54.754 user 0m0.529s 00:05:54.754 sys 0m0.204s 00:05:54.754 02:53:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.754 02:53:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:54.754 ************************************ 00:05:54.754 02:53:10 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:54.754 02:53:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.754 02:53:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.754 02:53:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.754 ************************************ 00:05:54.754 START TEST bdev_bounds 00:05:54.754 ************************************ 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:54.754 Process bdevio pid: 71485 00:05:54.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71485 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71485' 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71485 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71485 ']' 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:54.754 02:53:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:54.754 [2024-11-29 02:53:10.699077] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:54.755 [2024-11-29 02:53:10.699216] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71485 ] 00:05:55.031 [2024-11-29 02:53:10.846537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.031 [2024-11-29 02:53:10.870923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.031 [2024-11-29 02:53:10.871110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.031 [2024-11-29 02:53:10.871180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.601 02:53:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.601 02:53:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:55.601 02:53:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:55.861 I/O targets: 00:05:55.861 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:55.861 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:55.861 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:55.861 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:55.861 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:55.861 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:55.861 00:05:55.861 00:05:55.861 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.861 http://cunit.sourceforge.net/ 00:05:55.861 00:05:55.861 00:05:55.861 Suite: bdevio tests on: Nvme3n1 00:05:55.861 Test: blockdev write read block ...passed 00:05:55.861 Test: blockdev write zeroes read block ...passed 00:05:55.861 Test: blockdev write zeroes read no split ...passed 00:05:55.861 Test: blockdev write zeroes read split ...passed 00:05:55.861 Test: blockdev write zeroes read split partial ...passed 00:05:55.861 Test: blockdev reset ...[2024-11-29 02:53:11.680041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:55.861 passed 00:05:55.861 Test: blockdev write read 8 blocks ...[2024-11-29 02:53:11.683897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:55.861 passed 00:05:55.861 Test: blockdev write read size > 128k ...passed 00:05:55.861 Test: blockdev write read invalid size ...passed 00:05:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.861 Test: blockdev write read max offset ...passed 00:05:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.861 Test: blockdev writev readv 8 blocks ...passed 00:05:55.861 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.861 Test: blockdev writev readv block ...passed 00:05:55.861 Test: blockdev writev readv size > 128k ...passed 00:05:55.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.861 Test: blockdev comparev and writev ...[2024-11-29 02:53:11.700265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc406000 len:0x1000 00:05:55.861 [2024-11-29 02:53:11.700331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.861 passed 00:05:55.861 Test: blockdev nvme passthru rw ...passed 00:05:55.861 Test: blockdev nvme passthru vendor specific ...passed 00:05:55.861 Test: blockdev nvme admin passthru ...[2024-11-29 02:53:11.702746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.861 [2024-11-29 02:53:11.702794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.861 passed 00:05:55.861 Test: blockdev copy ...passed 00:05:55.861 Suite: bdevio tests on: Nvme2n3 00:05:55.861 Test: blockdev write read block ...passed 00:05:55.861 Test: blockdev write zeroes read block ...passed 00:05:55.861 Test: blockdev write zeroes read no split ...passed 00:05:55.861 Test: blockdev write zeroes read split ...passed 00:05:55.861 Test: blockdev write zeroes read split partial ...passed 00:05:55.861 Test: blockdev reset ...[2024-11-29 02:53:11.733705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.861 [2024-11-29 02:53:11.736751] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:05:55.861 Test: blockdev write read 8 blocks ...
00:05:55.861 passed 00:05:55.861 Test: blockdev write read size > 128k ...passed 00:05:55.861 Test: blockdev write read invalid size ...passed 00:05:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.861 Test: blockdev write read max offset ...passed 00:05:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.861 Test: blockdev writev readv 8 blocks ...passed 00:05:55.861 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.861 Test: blockdev writev readv block ...passed 00:05:55.861 Test: blockdev writev readv size > 128k ...passed 00:05:55.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.861 Test: blockdev comparev and writev ...[2024-11-29 02:53:11.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8c02000 len:0x1000 00:05:55.861 [2024-11-29 02:53:11.753694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.861 passed 00:05:55.861 Test: blockdev nvme passthru rw ...passed 00:05:55.861 Test: blockdev nvme passthru vendor specific ...passed 00:05:55.861 Test: blockdev nvme admin passthru ...[2024-11-29 02:53:11.756041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.861 [2024-11-29 02:53:11.756085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.861 passed 00:05:55.861 Test: blockdev copy ...passed 00:05:55.861 Suite: bdevio tests on: Nvme2n2 00:05:55.861 Test: blockdev write read block ...passed 00:05:55.861 Test: blockdev write zeroes read block ...passed 00:05:55.861 Test: blockdev write zeroes read no split ...passed 00:05:55.861 Test: blockdev write zeroes read split ...passed 00:05:55.861 Test: blockdev write zeroes read split partial ...passed 00:05:55.861 Test: blockdev reset ...[2024-11-29 02:53:11.787595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.861 [2024-11-29 02:53:11.790199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:05:55.861 passed 00:05:55.861 Test: blockdev write read 8 blocks ...passed 00:05:55.861 Test: blockdev write read size > 128k ...passed 00:05:55.861 Test: blockdev write read invalid size ...passed 00:05:55.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.861 Test: blockdev write read max offset ...passed 00:05:55.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.862 Test: blockdev writev readv 8 blocks ...passed 00:05:55.862 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.862 Test: blockdev writev readv block ...passed 00:05:55.862 Test: blockdev writev readv size > 128k ...passed 00:05:55.862 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.862 Test: blockdev comparev and writev ...[2024-11-29 02:53:11.807099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e003b000 len:0x1000 00:05:55.862 [2024-11-29 02:53:11.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.862 passed 00:05:55.862 Test: blockdev nvme passthru rw ...passed 00:05:55.862 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:11.809413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:05:55.862 Test: blockdev nvme admin passthru ... 00:05:55.862 [2024-11-29 02:53:11.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.862 passed 00:05:55.862 Test: blockdev copy ...passed 00:05:55.862 Suite: bdevio tests on: Nvme2n1 00:05:55.862 Test: blockdev write read block ...passed 00:05:55.862 Test: blockdev write zeroes read block ...passed 00:05:55.862 Test: blockdev write zeroes read no split ...passed 00:05:55.862 Test: blockdev write zeroes read split ...passed 00:05:55.862 Test: blockdev write zeroes read split partial ...passed 00:05:55.862 Test: blockdev reset ...[2024-11-29 02:53:11.842140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.862 [2024-11-29 02:53:11.846269] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:05:55.862 Test: blockdev write read 8 blocks ...
00:05:55.862 passed 00:05:55.862 Test: blockdev write read size > 128k ...passed 00:05:55.862 Test: blockdev write read invalid size ...passed 00:05:56.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:56.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:56.121 Test: blockdev write read max offset ...passed 00:05:56.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:56.121 Test: blockdev writev readv 8 blocks ...passed 00:05:56.121 Test: blockdev writev readv 30 x 1block ...passed 00:05:56.121 Test: blockdev writev readv block ...passed 00:05:56.121 Test: blockdev writev readv size > 128k ...passed 00:05:56.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:56.121 Test: blockdev comparev and writev ...[2024-11-29 02:53:11.865703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0037000 len:0x1000 00:05:56.121 [2024-11-29 02:53:11.865758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:56.121 passed 00:05:56.121 Test: blockdev nvme passthru rw ...passed 00:05:56.121 Test: blockdev nvme passthru vendor specific ...passed 00:05:56.121 Test: blockdev nvme admin passthru ...[2024-11-29 02:53:11.867683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:56.121 [2024-11-29 02:53:11.867740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:56.121 passed 00:05:56.121 Test: blockdev copy ...passed 00:05:56.121 Suite: bdevio tests on: Nvme1n1 00:05:56.121 Test: blockdev write read block ...passed 00:05:56.121 Test: blockdev write zeroes read block ...passed 00:05:56.121 Test: blockdev write zeroes read no split ...passed 00:05:56.121 Test: blockdev write zeroes read split ...passed 00:05:56.121 Test: blockdev write zeroes read split partial ...passed 00:05:56.121 Test: blockdev reset ...[2024-11-29 02:53:11.894984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:56.121 [2024-11-29 02:53:11.899951] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:56.121 passed 00:05:56.121 Test: blockdev write read 8 blocks ...passed 00:05:56.121 Test: blockdev write read size > 128k ...passed 00:05:56.121 Test: blockdev write read invalid size ...passed 00:05:56.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:56.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:56.121 Test: blockdev write read max offset ...passed 00:05:56.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:56.121 Test: blockdev writev readv 8 blocks ...passed 00:05:56.121 Test: blockdev writev readv 30 x 1block ...passed 00:05:56.121 Test: blockdev writev readv block ...passed 00:05:56.121 Test: blockdev writev readv size > 128k ...passed 00:05:56.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:56.121 Test: blockdev comparev and writev ...[2024-11-29 02:53:11.916328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0033000 len:0x1000 00:05:56.121 [2024-11-29 02:53:11.916447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:56.121 passed 00:05:56.121 Test: blockdev nvme passthru rw ...passed 00:05:56.121 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:11.919123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:05:56.121 Test: blockdev nvme admin passthru ... 00:05:56.121 [2024-11-29 02:53:11.919376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:56.121 passed 00:05:56.121 Test: blockdev copy ...passed 00:05:56.121 Suite: bdevio tests on: Nvme0n1 00:05:56.121 Test: blockdev write read block ...passed 00:05:56.121 Test: blockdev write zeroes read block ...passed 00:05:56.121 Test: blockdev write zeroes read no split ...passed 00:05:56.121 Test: blockdev write zeroes read split ...passed 00:05:56.121 Test: blockdev write zeroes read split partial ...passed 00:05:56.121 Test: blockdev reset ...[2024-11-29 02:53:11.952236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:56.121 [2024-11-29 02:53:11.956656] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:05:56.121 Test: blockdev write read 8 blocks ...
00:05:56.121 passed 00:05:56.121 Test: blockdev write read size > 128k ...passed 00:05:56.121 Test: blockdev write read invalid size ...passed 00:05:56.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:56.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:56.121 Test: blockdev write read max offset ...passed 00:05:56.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:56.121 Test: blockdev writev readv 8 blocks ...passed 00:05:56.121 Test: blockdev writev readv 30 x 1block ...passed 00:05:56.121 Test: blockdev writev readv block ...passed 00:05:56.121 Test: blockdev writev readv size > 128k ...passed 00:05:56.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:56.121 Test: blockdev comparev and writev ...passed 00:05:56.121 Test: blockdev nvme passthru rw ...[2024-11-29 02:53:11.971758] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:56.121 separate metadata which is not supported yet. 00:05:56.121 passed 00:05:56.121 Test: blockdev nvme passthru vendor specific ...passed 00:05:56.121 Test: blockdev nvme admin passthru ...[2024-11-29 02:53:11.973618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:56.121 [2024-11-29 02:53:11.973685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:56.121 passed 00:05:56.121 Test: blockdev copy ...passed 00:05:56.121 00:05:56.121 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.121 suites 6 6 n/a 0 0 00:05:56.121 tests 138 138 138 0 0 00:05:56.121 asserts 893 893 893 0 n/a 00:05:56.121 00:05:56.121 Elapsed time = 0.729 seconds 00:05:56.121 0 00:05:56.121 02:53:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71485 00:05:56.121 02:53:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71485 ']' 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71485 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71485 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71485' 00:05:56.121 killing process with pid 71485 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71485 00:05:56.121 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71485 00:05:56.380 02:53:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:56.380 00:05:56.380 real 0m1.562s 00:05:56.380 user 0m3.911s 00:05:56.380 sys 0m0.298s 00:05:56.380 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.380 ************************************ 00:05:56.380 END TEST bdev_bounds 00:05:56.380 ************************************ 00:05:56.380 02:53:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:05:56.380 02:53:12 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:56.380 02:53:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:56.380 02:53:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.380 02:53:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.380 ************************************ 00:05:56.380 START TEST bdev_nbd 00:05:56.380 ************************************ 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71539 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71539 /var/tmp/spdk-nbd.sock 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71539 ']' 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
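The waitforlisten call traced here (pid 71539, rpc_addr /var/tmp/spdk-nbd.sock, max_retries=100) blocks until the freshly spawned bdev_svc process is ready to accept RPCs. A rough sketch of that wait loop; note that the real helper verifies readiness through the RPC interface (scripts/rpc.py), so the [ -S ] socket-file test below is only a simplified stand-in:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            # Fail fast if the server process died before ever listening
            kill -0 "$pid" 2>/dev/null || return 1
            # Simplified readiness check: the UNIX socket file exists
            [ -S "$rpc_addr" ] && return 0
            sleep 0.1
        done
        return 1
    }

The liveness check inside the loop is what keeps startup crashes cheap: a dead target fails the wait immediately instead of burning all 100 retries.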
00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:56.380 02:53:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:56.380 [2024-11-29 02:53:12.346802] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:56.380 [2024-11-29 02:53:12.347167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.639 [2024-11-29 02:53:12.498142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.639 [2024-11-29 02:53:12.530743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep 
-q -w nbd0 /proc/partitions 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.580 1+0 records in 00:05:57.580 1+0 records out 00:05:57.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944644 s, 4.3 MB/s 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.580 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.581 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.840 1+0 records in 00:05:57.840 1+0 records out 00:05:57.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429287 s, 9.5 MB/s 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 
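The waitfornbd sequence traced above is a two-phase readiness check: first poll /proc/partitions until the kernel registers the nbd device, then prove the device actually serves I/O with a single 4 KiB O_DIRECT read, using stat to confirm a full block came back. The dd transcript lines in this log (1+0 records in/out, 4096 bytes copied) are the output of exactly that probe. A condensed sketch of the two phases; the temp-file path and retry pacing here are illustrative, not the exact values of the real helper:

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait for the device to appear in the partition table
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: the node can exist before I/O works, so read one block
        # with O_DIRECT and confirm that all 4096 bytes were copied
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        rm -f /tmp/nbdtest
        return 1
    }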
00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.840 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:58.097 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.098 1+0 records in 00:05:58.098 1+0 records out 00:05:58.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705733 s, 5.8 MB/s 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.098 02:53:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.355 1+0 records in 00:05:58.355 1+0 records out 00:05:58.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115223 s, 3.6 MB/s 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.355 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.615 1+0 records in 00:05:58.615 1+0 records out 00:05:58.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108336 s, 3.8 MB/s 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.615 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:58.874 
02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.874 1+0 records in 00:05:58.874 1+0 records out 00:05:58.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133449 s, 3.1 MB/s 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.874 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd0", 00:05:59.133 "bdev_name": "Nvme0n1" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd1", 00:05:59.133 "bdev_name": "Nvme1n1" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd2", 00:05:59.133 "bdev_name": "Nvme2n1" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd3", 00:05:59.133 "bdev_name": "Nvme2n2" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd4", 00:05:59.133 "bdev_name": "Nvme2n3" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd5", 00:05:59.133 "bdev_name": "Nvme3n1" 00:05:59.133 } 00:05:59.133 ]' 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd0", 00:05:59.133 "bdev_name": "Nvme0n1" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd1", 00:05:59.133 "bdev_name": "Nvme1n1" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd2", 00:05:59.133 "bdev_name": "Nvme2n1" 
00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd3", 00:05:59.133 "bdev_name": "Nvme2n2" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd4", 00:05:59.133 "bdev_name": "Nvme2n3" 00:05:59.133 }, 00:05:59.133 { 00:05:59.133 "nbd_device": "/dev/nbd5", 00:05:59.133 "bdev_name": "Nvme3n1" 00:05:59.133 } 00:05:59.133 ]' 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.133 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:59.134 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.134 02:53:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.393 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 
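The @872-@893 sequence repeated for each device above is the waitfornbd readiness probe from autotest_common.sh: it polls /proc/partitions until the kernel registers the new nbd node, then retries a single 4 KiB O_DIRECT read and treats a non-empty copy as proof the device actually services I/O. A minimal sketch reconstructed from the xtrace (the sleep between retries is an assumption, it is not visible in the trace; the test-file path is the one logged):

  waitfornbd() {
      local nbd_name=$1 i size
      # @875-877: wait for the device to show up in the partition table
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # @888-893: retry a one-block direct read until it succeeds
      for ((i = 1; i <= 20; i++)); do
          if dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                bs=4096 count=1 iflag=direct; then
              size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
              rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
              # a non-zero read size means the export is live
              (( size != 0 )) && return 0
          fi
          sleep 0.1
      done
      return 1
  }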
00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.654 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.915 02:53:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.175 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.435 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.695 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 
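With all six exports torn down again (waitfornbd_exit at @35-@45 is the inverse probe: it polls until the node disappears from /proc/partitions), nbd_rpc_data_verify re-attaches every bdev, this time on the explicit node list /dev/nbd0, /dev/nbd1 and /dev/nbd10-13. The pairing loop at @14-@17 is, in outline (names and paths taken from the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for ((i = 0; i < 6; i++)); do
      # export bdev i on the requested node, then block until it answers I/O
      "$rpc_py" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
      waitfornbd "$(basename "${nbd_list[i]}")"
  done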
00:06:00.955 /dev/nbd0 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.955 1+0 records in 00:06:00.955 1+0 records out 00:06:00.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116394 s, 3.5 MB/s 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.955 02:53:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:01.217 /dev/nbd1 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.217 1+0 records in 00:06:01.217 1+0 records out 00:06:01.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127356 s, 3.2 MB/s 00:06:01.217 02:53:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.217 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:01.478 /dev/nbd10 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.478 1+0 records in 00:06:01.478 1+0 records out 00:06:01.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106474 s, 3.8 MB/s 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.478 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:01.739 /dev/nbd11 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.739 02:53:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.739 1+0 records in 00:06:01.739 1+0 records out 00:06:01.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974655 s, 4.2 MB/s 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.739 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:01.999 /dev/nbd12 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.999 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.999 1+0 records in 00:06:02.000 1+0 records out 00:06:02.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121237 s, 3.4 MB/s 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:02.000 02:53:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:02.272 /dev/nbd13 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.272 1+0 records in 00:06:02.272 1+0 records out 00:06:02.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133333 s, 3.1 MB/s 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.272 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd0", 00:06:02.564 "bdev_name": "Nvme0n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd1", 00:06:02.564 "bdev_name": "Nvme1n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd10", 00:06:02.564 "bdev_name": "Nvme2n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd11", 00:06:02.564 "bdev_name": "Nvme2n2" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd12", 00:06:02.564 "bdev_name": "Nvme2n3" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 
"nbd_device": "/dev/nbd13", 00:06:02.564 "bdev_name": "Nvme3n1" 00:06:02.564 } 00:06:02.564 ]' 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd0", 00:06:02.564 "bdev_name": "Nvme0n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd1", 00:06:02.564 "bdev_name": "Nvme1n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd10", 00:06:02.564 "bdev_name": "Nvme2n1" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd11", 00:06:02.564 "bdev_name": "Nvme2n2" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd12", 00:06:02.564 "bdev_name": "Nvme2n3" 00:06:02.564 }, 00:06:02.564 { 00:06:02.564 "nbd_device": "/dev/nbd13", 00:06:02.564 "bdev_name": "Nvme3n1" 00:06:02.564 } 00:06:02.564 ]' 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.564 /dev/nbd1 00:06:02.564 /dev/nbd10 00:06:02.564 /dev/nbd11 00:06:02.564 /dev/nbd12 00:06:02.564 /dev/nbd13' 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.564 /dev/nbd1 00:06:02.564 /dev/nbd10 00:06:02.564 /dev/nbd11 00:06:02.564 /dev/nbd12 00:06:02.564 /dev/nbd13' 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:02.564 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:02.565 256+0 records in 00:06:02.565 256+0 records out 00:06:02.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00777274 s, 135 MB/s 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.565 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.826 256+0 records in 00:06:02.826 256+0 records out 00:06:02.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227279 s, 4.6 MB/s 00:06:02.826 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.826 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:06:02.826 256+0 records in 00:06:02.826 256+0 records out 00:06:02.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175065 s, 6.0 MB/s 00:06:02.826 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.826 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:03.088 256+0 records in 00:06:03.088 256+0 records out 00:06:03.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2135 s, 4.9 MB/s 00:06:03.088 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.088 02:53:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:03.349 256+0 records in 00:06:03.349 256+0 records out 00:06:03.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193565 s, 5.4 MB/s 00:06:03.349 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.349 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:03.610 256+0 records in 00:06:03.610 256+0 records out 00:06:03.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205054 s, 5.1 MB/s 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:03.610 256+0 records in 00:06:03.610 256+0 records out 00:06:03.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149763 s, 7.0 MB/s 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.610 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.611 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.872 02:53:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.131 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.390 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.647 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.906 02:53:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:05.165 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:05.422 malloc_lvol_verify 00:06:05.422 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:05.683 de0e1bd6-36c6-453c-9b07-069c811a5ae6 00:06:05.683 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:05.944 9a2d7db0-09f6-4bcb-923f-fe83d2dab2de 00:06:05.944 02:53:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:06.205 /dev/nbd0 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:06.205 mke2fs 1.47.0 (5-Feb-2023) 00:06:06.205 
Discarding device blocks: 0/4096 done 00:06:06.205 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:06.205 00:06:06.205 Allocating group tables: 0/1 done 00:06:06.205 Writing inode tables: 0/1 done 00:06:06.205 Creating journal (1024 blocks): done 00:06:06.205 Writing superblocks and filesystem accounting information: 0/1 done 00:06:06.205 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.205 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71539 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71539 ']' 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71539 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71539 00:06:06.467 killing process with pid 71539 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71539' 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71539 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71539 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:06.467 00:06:06.467 real 0m10.183s 00:06:06.467 user 0m14.272s 00:06:06.467 sys 0m3.574s 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.467 ************************************ 00:06:06.467 END TEST bdev_nbd 00:06:06.467 ************************************ 00:06:06.467 02:53:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
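The closing nbd_with_lvol_verify step (@131-@142) exercises the stack one layer higher: it carves a logical volume out of a malloc bdev, exports it on /dev/nbd0, confirms the kernel reports a non-zero capacity, and formats it. Reconstructed from the RPC calls in the trace:

  # 16 MiB malloc bdev with 512 B blocks, hosting an lvolstore and a 4 MiB lvol
  "$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
  "$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs
  "$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
  # @148-150: sysfs must show a non-zero size (8192 sectors = 4 MiB here)
  (( $(cat /sys/block/nbd0/size) != 0 ))
  mkfs.ext4 /dev/nbd0
  "$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0

The mke2fs output ("4096 1k blocks") shows the full 4 MiB lvol surfaced through the NBD node before the harness stops the disk, kills the bdev service (pid 71539) and prints the bdev_nbd totals (real 0m10.183s).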
00:06:06.728 skipping fio tests on NVMe due to multi-ns failures. 00:06:06.728 02:53:22 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:06.728 02:53:22 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:06.728 02:53:22 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:06.728 02:53:22 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:06.728 02:53:22 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:06.728 02:53:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:06.728 02:53:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.728 02:53:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.728 ************************************ 00:06:06.728 START TEST bdev_verify 00:06:06.728 ************************************ 00:06:06.728 02:53:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:06.728 [2024-11-29 02:53:22.560590] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:06.728 [2024-11-29 02:53:22.560695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71920 ] 00:06:06.728 [2024-11-29 02:53:22.706805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.989 [2024-11-29 02:53:22.727067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.989 [2024-11-29 02:53:22.727143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.249 Running I/O for 5 seconds... 
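bdev_verify leaves the kernel NBD path behind and drives the same six namespaces through the bdevperf example app. The invocation traced above, with the flag meanings spelled out (per bdevperf's usage text; this is an annotation of the logged command, not a new configuration):

  # -q 128     128 outstanding I/Os per job
  # -o 4096    4 KiB I/O size
  # -w verify  write a pattern, read it back, compare
  # -t 5       run for 5 seconds
  # -C         let every core submit I/O to every bdev
  # -m 0x3     core mask: reactors on cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The EAL parameter dump and the two reactor lines above confirm both cores came up before the 5-second run started.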
00:06:09.572 18752.00 IOPS, 73.25 MiB/s [2024-11-29T02:53:26.497Z] 19040.00 IOPS, 74.38 MiB/s [2024-11-29T02:53:27.449Z] 21205.33 IOPS, 82.83 MiB/s [2024-11-29T02:53:28.385Z] 21616.00 IOPS, 84.44 MiB/s [2024-11-29T02:53:28.385Z] 21388.80 IOPS, 83.55 MiB/s 00:06:12.393 Latency(us) 00:06:12.393 [2024-11-29T02:53:28.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:12.393 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0x0 length 0xbd0bd 00:06:12.393 Nvme0n1 : 5.05 1671.28 6.53 0.00 0.00 76371.11 12603.08 93968.54 00:06:12.393 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:12.393 Nvme0n1 : 5.04 1830.13 7.15 0.00 0.00 69759.75 11191.53 100824.62 00:06:12.393 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0x0 length 0xa0000 00:06:12.393 Nvme1n1 : 5.06 1670.68 6.53 0.00 0.00 76298.97 14619.57 87919.06 00:06:12.393 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0xa0000 length 0xa0000 00:06:12.393 Nvme1n1 : 5.04 1829.60 7.15 0.00 0.00 69529.46 11897.30 85095.98 00:06:12.393 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0x0 length 0x80000 00:06:12.393 Nvme2n1 : 5.06 1670.12 6.52 0.00 0.00 76183.07 14619.57 80659.69 00:06:12.393 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.393 Verification LBA range: start 0x80000 length 0x80000 00:06:12.393 Nvme2n1 : 5.07 1844.14 7.20 0.00 0.00 68666.12 8822.15 68964.04 00:06:12.393 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x0 length 0x80000 00:06:12.394 Nvme2n2 : 5.06 1669.64 6.52 0.00 0.00 75829.52 14317.10 70577.23 00:06:12.394 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x80000 length 0x80000 00:06:12.394 Nvme2n2 : 5.07 1843.76 7.20 0.00 0.00 68511.48 8469.27 62511.26 00:06:12.394 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x0 length 0x80000 00:06:12.394 Nvme2n3 : 5.08 1686.68 6.59 0.00 0.00 74989.47 5772.21 72593.72 00:06:12.394 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x80000 length 0x80000 00:06:12.394 Nvme2n3 : 5.08 1864.20 7.28 0.00 0.00 67682.41 4108.60 65334.35 00:06:12.394 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x0 length 0x20000 00:06:12.394 Nvme3n1 : 5.09 1686.23 6.59 0.00 0.00 74867.52 5847.83 73803.62 00:06:12.394 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.394 Verification LBA range: start 0x20000 length 0x20000 00:06:12.394 Nvme3n1 : 5.08 1863.83 7.28 0.00 0.00 67617.35 4310.25 66947.54 00:06:12.394 [2024-11-29T02:53:28.386Z] =================================================================================================================== 00:06:12.394 [2024-11-29T02:53:28.386Z] Total : 21130.29 82.54 0.00 0.00 72014.89 4108.60 100824.62 00:06:12.963 00:06:12.963 real 0m6.164s 00:06:12.963 user 0m11.659s 00:06:12.963 sys 0m0.191s 00:06:12.963 02:53:28 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.963 02:53:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 ************************************ 00:06:12.963 END TEST bdev_verify 00:06:12.963 ************************************ 00:06:12.963 02:53:28 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:12.963 02:53:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:12.963 02:53:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.963 02:53:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 ************************************ 00:06:12.963 START TEST bdev_verify_big_io 00:06:12.963 ************************************ 00:06:12.963 02:53:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:12.963 [2024-11-29 02:53:28.791586] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:12.963 [2024-11-29 02:53:28.791697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72007 ] 00:06:12.963 [2024-11-29 02:53:28.935346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.220 [2024-11-29 02:53:28.956309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.220 [2024-11-29 02:53:28.956464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.477 Running I/O for 5 seconds... 
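The MiB/s columns in these bdevperf tables are simply IOPS scaled by the I/O size: the 4 KiB verify pass totals 21130.29 IOPS * 4096 B / 2^20 = 82.54 MiB/s, and the big-I/O table that follows repeats the workload at -o 65536, where 1658.39 IOPS yields 103.65 MiB/s (fewer, larger I/Os, higher throughput). A quick recomputation:

  awk 'BEGIN { printf "%.2f MiB/s\n", 21130.29 * 4096  / 1048576 }'   # 82.54, bdev_verify total
  awk 'BEGIN { printf "%.2f MiB/s\n", 1658.39  * 65536 / 1048576 }'   # 103.65, bdev_verify_big_io total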
00:06:17.395 1219.00 IOPS, 76.19 MiB/s [2024-11-29T02:53:35.295Z] 2064.00 IOPS, 129.00 MiB/s [2024-11-29T02:53:35.556Z] 2204.33 IOPS, 137.77 MiB/s
00:06:19.564 Latency(us)
00:06:19.564 [2024-11-29T02:53:35.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:19.564 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0xbd0b
00:06:19.564 Nvme0n1 : 5.74 122.64 7.67 0.00 0.00 993213.51 12351.02 1064707.94
00:06:19.564 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:19.564 Nvme0n1 : 5.68 135.24 8.45 0.00 0.00 913455.52 14720.39 884030.23
00:06:19.564 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0xa000
00:06:19.564 Nvme1n1 : 5.63 124.94 7.81 0.00 0.00 959787.90 107277.39 903388.55
00:06:19.564 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0xa000 length 0xa000
00:06:19.564 Nvme1n1 : 5.68 135.17 8.45 0.00 0.00 880772.86 79046.50 825955.25
00:06:19.564 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0x8000
00:06:19.564 Nvme2n1 : 5.74 126.58 7.91 0.00 0.00 917183.36 104051.00 1013085.74
00:06:19.564 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x8000 length 0x8000
00:06:19.564 Nvme2n1 : 5.77 137.13 8.57 0.00 0.00 847089.67 83886.08 851766.35
00:06:19.564 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0x8000
00:06:19.564 Nvme2n2 : 5.85 134.23 8.39 0.00 0.00 842419.20 17845.96 1374441.16
00:06:19.564 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x8000 length 0x8000
00:06:19.564 Nvme2n2 : 5.81 143.27 8.95 0.00 0.00 800773.09 35086.97 884030.23
00:06:19.564 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0x8000
00:06:19.564 Nvme2n3 : 5.87 135.35 8.46 0.00 0.00 808574.94 20971.52 2000360.37
00:06:19.564 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x8000 length 0x8000
00:06:19.564 Nvme2n3 : 5.82 150.24 9.39 0.00 0.00 749963.25 8721.33 909841.33
00:06:19.564 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:19.564 Verification LBA range: start 0x0 length 0x2000
00:06:19.565 Nvme3n1 : 5.90 159.84 9.99 0.00 0.00 668177.04 1140.58 2039077.02
00:06:19.565 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:19.565 Verification LBA range: start 0x2000 length 0x2000
00:06:19.565 Nvme3n1 : 5.83 153.77 9.61 0.00 0.00 716220.69 5721.80 929199.66
00:06:19.565 [2024-11-29T02:53:35.557Z] ===================================================================================================================
00:06:19.565 [2024-11-29T02:53:35.557Z] Total : 1658.39 103.65 0.00 0.00 833140.08 1140.58 2039077.02
00:06:20.135
00:06:20.135 real 0m7.157s
00:06:20.135 user 0m13.622s
00:06:20.135 sys 0m0.204s
00:06:20.135 02:53:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.135 ************************************
00:06:20.135 END TEST bdev_verify_big_io
00:06:20.135 ************************************
00:06:20.135 02:53:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:20.135 02:53:35 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:20.135 02:53:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:20.135 02:53:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.135 02:53:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:20.135 ************************************
00:06:20.135 START TEST bdev_write_zeroes
00:06:20.135 ************************************
00:06:20.135 02:53:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:20.135 [2024-11-29 02:53:36.021370] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:06:20.135 [2024-11-29 02:53:36.021481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72100 ]
00:06:20.396 [2024-11-29 02:53:36.167048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.396 [2024-11-29 02:53:36.186729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.657 Running I/O for 1 seconds...
00:06:21.651 57216.00 IOPS, 223.50 MiB/s
00:06:21.651 Latency(us)
00:06:21.651 [2024-11-29T02:53:37.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:21.651 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme0n1 : 1.02 9543.51 37.28 0.00 0.00 13383.17 7461.02 22483.89
00:06:21.651 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme1n1 : 1.02 9532.11 37.23 0.00 0.00 13384.94 9830.40 22282.24
00:06:21.651 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme2n1 : 1.02 9521.34 37.19 0.00 0.00 13333.58 9779.99 22887.19
00:06:21.651 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme2n2 : 1.02 9510.63 37.15 0.00 0.00 13308.01 8973.39 23592.96
00:06:21.651 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme2n3 : 1.02 9499.89 37.11 0.00 0.00 13281.43 7410.61 22887.19
00:06:21.651 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:21.651 Nvme3n1 : 1.03 9489.00 37.07 0.00 0.00 13265.32 6175.51 21576.47
00:06:21.651 [2024-11-29T02:53:37.643Z] ===================================================================================================================
00:06:21.651 [2024-11-29T02:53:37.643Z] Total : 57096.47 223.03 0.00 0.00 13326.07 6175.51 23592.96
00:06:21.913
00:06:21.913 real 0m1.814s
00:06:21.913 user 0m1.519s
00:06:21.913 sys 0m0.181s
00:06:21.913 02:53:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.913 ************************************
00:06:21.913 END TEST bdev_write_zeroes
************************************ 00:06:21.913 02:53:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:21.913 02:53:37 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.913 02:53:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:21.913 02:53:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.913 02:53:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.913 ************************************ 00:06:21.913 START TEST bdev_json_nonenclosed 00:06:21.913 ************************************ 00:06:21.913 02:53:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.913 [2024-11-29 02:53:37.882943] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:21.913 [2024-11-29 02:53:37.883061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72142 ] 00:06:22.173 [2024-11-29 02:53:38.027183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.173 [2024-11-29 02:53:38.046058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.173 [2024-11-29 02:53:38.046137] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:22.173 [2024-11-29 02:53:38.046152] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:22.173 [2024-11-29 02:53:38.046165] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.173 00:06:22.173 real 0m0.280s 00:06:22.173 user 0m0.103s 00:06:22.173 sys 0m0.074s 00:06:22.173 02:53:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.173 ************************************ 00:06:22.173 END TEST bdev_json_nonenclosed 00:06:22.173 ************************************ 00:06:22.173 02:53:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:22.173 02:53:38 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.173 02:53:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:22.173 02:53:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.173 02:53:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.173 ************************************ 00:06:22.173 START TEST bdev_json_nonarray 00:06:22.173 ************************************ 00:06:22.173 02:53:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.433 [2024-11-29 02:53:38.220967] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
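(As the nonarray run spins up, the two negative fixtures being exercised here are easy to picture. Hypothetical reconstructions, inferred only from the error strings in this log; the real files live under test/bdev/ in the repo:)

    # nonenclosed.json -- top level not wrapped in {}; json_config rejects it
    # with "Invalid JSON configuration: not enclosed in {}."
    #   "subsystems": []
    #
    # nonarray.json -- 'subsystems' present but not an array; rejected with
    # "Invalid JSON configuration: 'subsystems' should be an array."
    #   { "subsystems": {} }

Both runs end in the spdk_app_stop'd on non-zero warning seen in the surrounding output, which is the outcome these cases look for.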
00:06:22.434 [2024-11-29 02:53:38.221078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72167 ] 00:06:22.434 [2024-11-29 02:53:38.364106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.434 [2024-11-29 02:53:38.383505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.434 [2024-11-29 02:53:38.383593] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:22.434 [2024-11-29 02:53:38.383608] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:22.434 [2024-11-29 02:53:38.383619] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.720 00:06:22.720 real 0m0.284s 00:06:22.720 user 0m0.108s 00:06:22.720 sys 0m0.072s 00:06:22.720 02:53:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.720 ************************************ 00:06:22.720 END TEST bdev_json_nonarray 00:06:22.720 ************************************ 00:06:22.720 02:53:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:22.720 02:53:38 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:22.720 00:06:22.720 real 0m30.631s 00:06:22.720 user 0m47.653s 00:06:22.720 sys 0m5.582s 00:06:22.720 ************************************ 00:06:22.720 END TEST blockdev_nvme 00:06:22.720 ************************************ 00:06:22.720 02:53:38 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.720 02:53:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.720 02:53:38 -- spdk/autotest.sh@209 -- # uname -s 00:06:22.720 02:53:38 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:22.720 02:53:38 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:22.720 02:53:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.720 02:53:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.720 02:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.720 ************************************ 00:06:22.720 START TEST blockdev_nvme_gpt 00:06:22.720 ************************************ 00:06:22.720 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:22.720 * Looking for test storage... 
00:06:22.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:22.720 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.720 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.720 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.720 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:22.720 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.721 02:53:38 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:22.721 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.721 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.721 --rc genhtml_branch_coverage=1 00:06:22.721 --rc genhtml_function_coverage=1 00:06:22.721 --rc genhtml_legend=1 00:06:22.721 --rc geninfo_all_blocks=1 00:06:22.721 --rc geninfo_unexecuted_blocks=1 00:06:22.721 00:06:22.721 ' 00:06:22.721 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.721 --rc 
genhtml_branch_coverage=1 00:06:22.721 --rc genhtml_function_coverage=1 00:06:22.721 --rc genhtml_legend=1 00:06:22.721 --rc geninfo_all_blocks=1 00:06:22.721 --rc geninfo_unexecuted_blocks=1 00:06:22.721 00:06:22.721 ' 00:06:22.721 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.721 --rc genhtml_branch_coverage=1 00:06:22.721 --rc genhtml_function_coverage=1 00:06:22.721 --rc genhtml_legend=1 00:06:22.721 --rc geninfo_all_blocks=1 00:06:22.721 --rc geninfo_unexecuted_blocks=1 00:06:22.721 00:06:22.721 ' 00:06:22.721 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.721 --rc genhtml_branch_coverage=1 00:06:22.721 --rc genhtml_function_coverage=1 00:06:22.721 --rc genhtml_legend=1 00:06:22.721 --rc geninfo_all_blocks=1 00:06:22.721 --rc geninfo_unexecuted_blocks=1 00:06:22.721 00:06:22.721 ' 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:22.721 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:22.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
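(A side note on the lcov probe traced a few entries back: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, numerically. A condensed sketch of the same walk, reusing the awk '{print $NF}' extraction from the trace; the real helper lives in scripts/common.sh:)

    lt() {   # usage: lt VER1 VER2 -> true iff VER1 < VER2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal, so not strictly less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x option spelling"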
00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72240 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72240 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72240 ']' 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.981 02:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:22.981 02:53:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.981 [2024-11-29 02:53:38.782689] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
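(The start_spdk_tgt/waitforlisten pattern unfolding here repeats in every suite of this log: launch the target, remember its pid, install a kill trap, then block until the RPC socket answers. A rough stand-in; rpc.py's -t timeout plus rpc_get_methods substitutes here for the real waitforlisten polling loop:)

    ./build/bin/spdk_tgt &                  # the '' '' above: no extra args
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # crude waitforlisten: fail unless /var/tmp/spdk.sock serves RPCs within 30 s
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null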
00:06:22.982 [2024-11-29 02:53:38.782818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ] 00:06:22.982 [2024-11-29 02:53:38.927948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.982 [2024-11-29 02:53:38.947632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.918 02:53:39 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.918 02:53:39 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:23.918 02:53:39 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:23.918 02:53:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:23.918 02:53:39 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.178 Waiting for block devices as requested 00:06:24.178 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.439 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.439 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:29.730 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- 
# for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:29.730 BYT; 00:06:29.730 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:29.730 BYT; 00:06:29.730 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
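(The get_zoned_devs walk just traced boils down to one sysfs probe per block device; nothing on this rig is zoned, so every [[ none != none ]] test falls through. The same loop, condensed:)

    # a device counts as zoned unless /sys/block/<dev>/queue/zoned reads "none"
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]] \
            && echo "${nvme##*/} is zoned; excluded from GPT setup"
    done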
00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.730 02:53:45 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.730 02:53:45 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:30.673 The operation has completed successfully. 00:06:30.673 02:53:46 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:31.607 The operation has completed successfully. 
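(The GUID plumbing above deserves a gloss: the partition-type GUIDs are not hard-coded in the test, they are scraped out of the SPDK_GPT_PART_TYPE_GUID macros in module/bdev/gpt/gpt.h, normalised, and handed to sgdisk. A hedged sketch of the same steps -- the real helpers are get_spdk_gpt and get_spdk_gpt_old in scripts/common.sh, and the exact parsing there may differ:)

    # keep what's inside SPDK_GPT_GUID( ... ), then drop "0x" and turn ", " into "-"
    inner=$(grep -w SPDK_GPT_PART_TYPE_GUID module/bdev/gpt/gpt.h)
    inner=${inner#*(}; inner=${inner%)*}
    spdk_guid=${inner//0x/}; spdk_guid=${spdk_guid//, /-}
    echo "$spdk_guid"   # 6527994e-2c5a-4eec-9613-8f5944074e8b, as in the trace
    # retag partition 1 with SPDK's type GUID plus the fixed unique GUID that
    # the gpt bdev lookups later in this log key off
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1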
00:06:31.607 02:53:47 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.429 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.686 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.686 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:32.686 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.686 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.686 [] 00:06:32.687 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.687 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:32.687 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:32.687 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:32.687 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.687 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:32.687 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.687 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.947 02:53:48 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.947 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0af860d2-1951-40fe-a175-032b497fb545"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0af860d2-1951-40fe-a175-032b497fb545",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "329fdd20-e557-4db1-bc92-c337f3670152"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "329fdd20-e557-4db1-bc92-c337f3670152",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bee76a66-5290-466f-9179-d43173e2de29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bee76a66-5290-466f-9179-d43173e2de29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fec544c8-667b-4cd4-a074-4a71c14b452d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fec544c8-667b-4cd4-a074-4a71c14b452d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a4899fb2-6528-47cb-bfc1-ea321d5c486c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a4899fb2-6528-47cb-bfc1-ea321d5c486c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:33.210 02:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72240 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72240 ']' 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72240 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.210 02:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72240 00:06:33.210 killing process with pid 72240 00:06:33.210 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.210 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.210 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72240' 00:06:33.210 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72240 00:06:33.210 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72240 00:06:33.472 02:53:49 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.472 02:53:49 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.472 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:33.472 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.472 02:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.472 ************************************ 00:06:33.472 START TEST bdev_hello_world 00:06:33.472 ************************************ 00:06:33.472 02:53:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.472 [2024-11-29 02:53:49.342536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:33.472 [2024-11-29 02:53:49.342650] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72852 ] 00:06:33.731 [2024-11-29 02:53:49.488767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.731 [2024-11-29 02:53:49.507770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.992 [2024-11-29 02:53:49.879752] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:33.992 [2024-11-29 02:53:49.879799] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:33.992 [2024-11-29 02:53:49.879818] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:33.992 [2024-11-29 02:53:49.881898] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:33.992 [2024-11-29 02:53:49.882919] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:33.992 [2024-11-29 02:53:49.882950] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:33.992 [2024-11-29 02:53:49.883559] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:33.992 00:06:33.992 [2024-11-29 02:53:49.883585] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:34.254 00:06:34.254 real 0m0.744s 00:06:34.254 user 0m0.485s 00:06:34.254 sys 0m0.152s 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.254 ************************************ 00:06:34.254 END TEST bdev_hello_world 00:06:34.254 ************************************ 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:34.254 02:53:50 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:34.254 02:53:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.254 02:53:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.254 02:53:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.254 ************************************ 00:06:34.254 START TEST bdev_bounds 00:06:34.254 ************************************ 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72883 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.254 Process bdevio pid: 72883 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72883' 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72883 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72883 ']' 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
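(bdev_bounds pairs two processes, as the next entries show: bdevio, started with -w so it waits as an I/O server on the RPC socket, and tests.py driving it. In isolation, from the repo root; -s 0 mirrors the PRE_RESERVED_MEM=0 set when blockdev.sh started:)

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests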
00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:06:34.254 02:53:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:06:34.254 [2024-11-29 02:53:50.146619] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:06:34.254 [2024-11-29 02:53:50.146740] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72883 ]
00:06:34.515 [2024-11-29 02:53:50.291148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:34.515 [2024-11-29 02:53:50.314162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.515 [2024-11-29 02:53:50.314741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.515 [2024-11-29 02:53:50.314794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.130 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:35.130 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:35.130 02:53:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:35.130 I/O targets:
00:06:35.130 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:35.130 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:35.130 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:35.130 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:35.130 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:35.130 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:35.130 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:35.130
00:06:35.130
00:06:35.130 CUnit - A unit testing framework for C - Version 2.1-3
00:06:35.130 http://cunit.sourceforge.net/
00:06:35.130
00:06:35.130
00:06:35.130 Suite: bdevio tests on: Nvme3n1
00:06:35.130 Test: blockdev write read block ...passed
00:06:35.130 Test: blockdev write zeroes read block ...passed
00:06:35.130 Test: blockdev write zeroes read no split ...passed
00:06:35.390 Test: blockdev write zeroes read split ...passed
00:06:35.390 Test: blockdev write zeroes read split partial ...passed
00:06:35.390 Test: blockdev reset ...[2024-11-29 02:53:51.135496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:06:35.390 [2024-11-29 02:53:51.139268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:35.390 passed 00:06:35.390 Test: blockdev write read 8 blocks ...passed 00:06:35.390 Test: blockdev write read size > 128k ...passed 00:06:35.390 Test: blockdev write read invalid size ...passed 00:06:35.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.390 Test: blockdev write read max offset ...passed 00:06:35.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.390 Test: blockdev writev readv 8 blocks ...passed 00:06:35.390 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.390 Test: blockdev writev readv block ...passed 00:06:35.390 Test: blockdev writev readv size > 128k ...passed 00:06:35.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.390 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.158615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c100e000 len:0x1000 00:06:35.390 [2024-11-29 02:53:51.158691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.390 passed 00:06:35.390 Test: blockdev nvme passthru rw ...passed 00:06:35.390 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:51.161581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.390 [2024-11-29 02:53:51.161629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.390 passed 00:06:35.390 Test: blockdev nvme admin passthru ...passed 00:06:35.390 Test: blockdev copy ...passed 00:06:35.390 Suite: bdevio tests on: Nvme2n3 00:06:35.390 Test: blockdev write read block ...passed 00:06:35.390 Test: blockdev write zeroes read block ...passed 00:06:35.390 Test: blockdev write zeroes read no split ...passed 00:06:35.390 Test: blockdev write zeroes read split ...passed 00:06:35.390 Test: blockdev write zeroes read split partial ...passed 00:06:35.390 Test: blockdev reset ...[2024-11-29 02:53:51.189684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.390 [2024-11-29 02:53:51.194226] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:35.390 passed 00:06:35.390 Test: blockdev write read 8 blocks ...passed 00:06:35.390 Test: blockdev write read size > 128k ...passed 00:06:35.390 Test: blockdev write read invalid size ...passed 00:06:35.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.390 Test: blockdev write read max offset ...passed 00:06:35.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.390 Test: blockdev writev readv 8 blocks ...passed 00:06:35.390 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.390 Test: blockdev writev readv block ...passed 00:06:35.390 Test: blockdev writev readv size > 128k ...passed 00:06:35.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.390 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.212223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1008000 len:0x1000 00:06:35.390 [2024-11-29 02:53:51.212280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.390 passed 00:06:35.390 Test: blockdev nvme passthru rw ...passed 00:06:35.390 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:51.214756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.390 [2024-11-29 02:53:51.214792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.390 passed 00:06:35.390 Test: blockdev nvme admin passthru ...passed 00:06:35.390 Test: blockdev copy ...passed 00:06:35.390 Suite: bdevio tests on: Nvme2n2 00:06:35.390 Test: blockdev write read block ...passed 00:06:35.390 Test: blockdev write zeroes read block ...passed 00:06:35.390 Test: blockdev write zeroes read no split ...passed 00:06:35.390 Test: blockdev write zeroes read split ...passed 00:06:35.390 Test: blockdev write zeroes read split partial ...passed 00:06:35.390 Test: blockdev reset ...[2024-11-29 02:53:51.246128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.390 passed 00:06:35.390 Test: blockdev write read 8 blocks ...[2024-11-29 02:53:51.249560] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
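Each suite runs the same reset sequence logged above: nvme_ctrlr_disconnect detaches the controller and bdev_nvme_reset_ctrlr_complete confirms the reconnect. A minimal sketch of driving that cycle by hand, assuming the running app exposes the bdev_nvme_reset_controller RPC and a controller named Nvme2 (both are assumptions here, not taken from this trace):

    # Trigger a controller reset through the RPC socket used by this test run.
    # "Nvme2" is an assumed controller name; check bdev_nvme_get_controllers output first.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme2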
00:06:35.390 passed 00:06:35.390 Test: blockdev write read size > 128k ...passed 00:06:35.390 Test: blockdev write read invalid size ...passed 00:06:35.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.391 Test: blockdev write read max offset ...passed 00:06:35.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.391 Test: blockdev writev readv 8 blocks ...passed 00:06:35.391 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.391 Test: blockdev writev readv block ...passed 00:06:35.391 Test: blockdev writev readv size > 128k ...passed 00:06:35.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.391 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.267366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1002000 len:0x1000 00:06:35.391 [2024-11-29 02:53:51.267415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.391 passed 00:06:35.391 Test: blockdev nvme passthru rw ...passed 00:06:35.391 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:51.269677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.391 [2024-11-29 02:53:51.269713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.391 passed 00:06:35.391 Test: blockdev nvme admin passthru ...passed 00:06:35.391 Test: blockdev copy ...passed 00:06:35.391 Suite: bdevio tests on: Nvme2n1 00:06:35.391 Test: blockdev write read block ...passed 00:06:35.391 Test: blockdev write zeroes read block ...passed 00:06:35.391 Test: blockdev write zeroes read no split ...passed 00:06:35.391 Test: blockdev write zeroes read split ...passed 00:06:35.391 Test: blockdev write zeroes read split partial ...passed 00:06:35.391 Test: blockdev reset ...[2024-11-29 02:53:51.299040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.391 passed 00:06:35.391 Test: blockdev write read 8 blocks ...[2024-11-29 02:53:51.302388] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:35.391 passed 00:06:35.391 Test: blockdev write read size > 128k ...passed 00:06:35.391 Test: blockdev write read invalid size ...passed 00:06:35.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.391 Test: blockdev write read max offset ...passed 00:06:35.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.391 Test: blockdev writev readv 8 blocks ...passed 00:06:35.391 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.391 Test: blockdev writev readv block ...passed 00:06:35.391 Test: blockdev writev readv size > 128k ...passed 00:06:35.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.391 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.319742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1404000 len:0x1000 00:06:35.391 [2024-11-29 02:53:51.319790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.391 passed 00:06:35.391 Test: blockdev nvme passthru rw ...passed 00:06:35.391 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:51.322063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.391 [2024-11-29 02:53:51.322099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.391 passed 00:06:35.391 Test: blockdev nvme admin passthru ...passed 00:06:35.391 Test: blockdev copy ...passed 00:06:35.391 Suite: bdevio tests on: Nvme1n1p2 00:06:35.391 Test: blockdev write read block ...passed 00:06:35.391 Test: blockdev write zeroes read block ...passed 00:06:35.391 Test: blockdev write zeroes read no split ...passed 00:06:35.391 Test: blockdev write zeroes read split ...passed 00:06:35.391 Test: blockdev write zeroes read split partial ...passed 00:06:35.391 Test: blockdev reset ...[2024-11-29 02:53:51.354719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:35.391 passed 00:06:35.391 Test: blockdev write read 8 blocks ...[2024-11-29 02:53:51.357885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:35.391 passed 00:06:35.391 Test: blockdev write read size > 128k ...passed 00:06:35.391 Test: blockdev write read invalid size ...passed 00:06:35.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.391 Test: blockdev write read max offset ...passed 00:06:35.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.391 Test: blockdev writev readv 8 blocks ...passed 00:06:35.391 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.391 Test: blockdev writev readv block ...passed 00:06:35.391 Test: blockdev writev readv size > 128k ...passed 00:06:35.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.391 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.377550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2de43d000 len:0x1000 00:06:35.391 [2024-11-29 02:53:51.377601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.391 passed 00:06:35.391 Test: blockdev nvme passthru rw ...passed 00:06:35.391 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.391 Test: blockdev nvme admin passthru ...passed 00:06:35.652 Test: blockdev copy ...passed 00:06:35.652 Suite: bdevio tests on: Nvme1n1p1 00:06:35.652 Test: blockdev write read block ...passed 00:06:35.652 Test: blockdev write zeroes read block ...passed 00:06:35.652 Test: blockdev write zeroes read no split ...passed 00:06:35.652 Test: blockdev write zeroes read split ...passed 00:06:35.652 Test: blockdev write zeroes read split partial ...passed 00:06:35.652 Test: blockdev reset ...[2024-11-29 02:53:51.405801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:35.652 passed 00:06:35.652 Test: blockdev write read 8 blocks ...[2024-11-29 02:53:51.409089] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:35.652 passed 00:06:35.652 Test: blockdev write read size > 128k ...passed 00:06:35.652 Test: blockdev write read invalid size ...passed 00:06:35.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.652 Test: blockdev write read max offset ...passed 00:06:35.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.652 Test: blockdev writev readv 8 blocks ...passed 00:06:35.652 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.652 Test: blockdev writev readv block ...passed 00:06:35.652 Test: blockdev writev readv size > 128k ...passed 00:06:35.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.652 Test: blockdev comparev and writev ...[2024-11-29 02:53:51.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2de439000 len:0x1000 00:06:35.652 [2024-11-29 02:53:51.427470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.652 passed 00:06:35.652 Test: blockdev nvme passthru rw ...passed 00:06:35.652 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.652 Test: blockdev nvme admin passthru ...passed 00:06:35.652 Test: blockdev copy ...passed 00:06:35.652 Suite: bdevio tests on: Nvme0n1 00:06:35.652 Test: blockdev write read block ...passed 00:06:35.652 Test: blockdev write zeroes read block ...passed 00:06:35.652 Test: blockdev write zeroes read no split ...passed 00:06:35.652 Test: blockdev write zeroes read split ...passed 00:06:35.652 Test: blockdev write zeroes read split partial ...passed 00:06:35.652 Test: blockdev reset ...[2024-11-29 02:53:51.454911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:35.652 [2024-11-29 02:53:51.457914] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:35.652 passed 00:06:35.652 Test: blockdev write read 8 blocks ...passed 00:06:35.652 Test: blockdev write read size > 128k ...passed 00:06:35.652 Test: blockdev write read invalid size ...passed 00:06:35.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.652 Test: blockdev write read max offset ...passed 00:06:35.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.652 Test: blockdev writev readv 8 blocks ...passed 00:06:35.652 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.652 Test: blockdev writev readv block ...passed 00:06:35.652 Test: blockdev writev readv size > 128k ...passed 00:06:35.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.652 Test: blockdev comparev and writev ...passed 00:06:35.652 Test: blockdev nvme passthru rw ...[2024-11-29 02:53:51.474089] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:35.652 separate metadata which is not supported yet. 
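The comparev notices also make the GPT layout visible: Nvme1n1p1's partition-relative LBA 0 is printed as lba:256 on the parent namespace, while Nvme1n1p2's shows up as lba:655360, i.e. p2 begins immediately after p1's 655104 blocks:

    # Parent-namespace LBAs seen in the comparev notices above:
    p1_start=256        # Nvme1n1p1 begins at LBA 256 (lba:256)
    p1_blocks=655104    # from the I/O targets list
    echo $(( p1_start + p1_blocks ))   # 655360 = Nvme1n1p2 LBA 0 (lba:655360)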
00:06:35.652 passed 00:06:35.652 Test: blockdev nvme passthru vendor specific ...[2024-11-29 02:53:51.475613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:35.652 [2024-11-29 02:53:51.475673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:35.652 passed 00:06:35.653 Test: blockdev nvme admin passthru ...passed 00:06:35.653 Test: blockdev copy ...passed 00:06:35.653 00:06:35.653 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.653 suites 7 7 n/a 0 0 00:06:35.653 tests 161 161 161 0 0 00:06:35.653 asserts 1025 1025 1025 0 n/a 00:06:35.653 00:06:35.653 Elapsed time = 0.841 seconds 00:06:35.653 0 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72883 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72883 ']' 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72883 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72883 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.653 killing process with pid 72883 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72883' 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72883 00:06:35.653 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72883 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:35.913 00:06:35.913 real 0m1.632s 00:06:35.913 user 0m4.150s 00:06:35.913 sys 0m0.280s 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 END TEST bdev_bounds 00:06:35.913 ************************************ 00:06:35.913 02:53:51 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.913 02:53:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:35.913 02:53:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.913 02:53:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 START TEST bdev_nbd 00:06:35.913 ************************************ 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:35.913 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72931 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72931 /var/tmp/spdk-nbd.sock 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72931 ']' 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.914 02:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.914 [2024-11-29 02:53:51.848537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
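The NBD setup that follows relies on two readiness checks visible in the trace: waitforlisten polls until the bdev_svc app answers on /var/tmp/spdk-nbd.sock, and waitfornbd first greps /proc/partitions for the device and then reads one 4096-byte block with O_DIRECT to prove the kernel NBD device serves I/O end to end. A condensed sketch of that pattern, not the actual helpers (retry bounds mirror the i <= 20 loops in the trace; the RPC-socket probe via rpc_get_methods is an assumption):

    sock=/var/tmp/spdk-nbd.sock
    for i in $(seq 1 20); do   # wait for the RPC socket to answer
      scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
    done
    for i in $(seq 1 20); do   # wait for the kernel to publish the device
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
    done
    # one direct-I/O read proves the device is live end to end
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct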
00:06:35.914 [2024-11-29 02:53:51.848653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.174 [2024-11-29 02:53:51.998154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.174 [2024-11-29 02:53:52.017098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:36.743 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.000 1+0 records in 00:06:37.000 1+0 records out 00:06:37.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117517 s, 3.5 MB/s 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.000 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.001 02:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.259 1+0 records in 00:06:37.259 1+0 records out 00:06:37.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584427 s, 7.0 MB/s 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.259 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.520 1+0 records in 00:06:37.520 1+0 records out 00:06:37.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504741 s, 8.1 MB/s 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.520 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.780 1+0 records in 00:06:37.780 1+0 records out 00:06:37.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000947163 s, 4.3 MB/s 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.780 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.781 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.781 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.781 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.781 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:37.781 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.041 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.042 1+0 records in 00:06:38.042 1+0 records out 00:06:38.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959424 s, 4.3 MB/s 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.042 02:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.303 1+0 records in 00:06:38.303 1+0 records out 00:06:38.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103101 s, 4.0 MB/s 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.303 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.564 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.565 1+0 records in 00:06:38.565 1+0 records out 00:06:38.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143427 s, 2.9 MB/s 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.565 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.825 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:38.825 { 00:06:38.825 "nbd_device": "/dev/nbd0", 00:06:38.825 "bdev_name": "Nvme0n1" 00:06:38.825 }, 00:06:38.825 { 00:06:38.825 "nbd_device": "/dev/nbd1", 00:06:38.825 "bdev_name": "Nvme1n1p1" 00:06:38.825 }, 00:06:38.825 { 00:06:38.825 "nbd_device": "/dev/nbd2", 00:06:38.825 "bdev_name": "Nvme1n1p2" 00:06:38.825 }, 00:06:38.825 { 00:06:38.825 "nbd_device": "/dev/nbd3", 00:06:38.825 "bdev_name": "Nvme2n1" 00:06:38.825 }, 00:06:38.825 { 00:06:38.841 "nbd_device": "/dev/nbd4", 00:06:38.841 "bdev_name": "Nvme2n2" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd5", 00:06:38.841 "bdev_name": "Nvme2n3" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd6", 00:06:38.841 "bdev_name": "Nvme3n1" 00:06:38.841 } 00:06:38.841 ]' 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd0", 00:06:38.841 "bdev_name": "Nvme0n1" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd1", 00:06:38.841 "bdev_name": "Nvme1n1p1" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd2", 00:06:38.841 "bdev_name": "Nvme1n1p2" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd3", 00:06:38.841 "bdev_name": "Nvme2n1" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd4", 00:06:38.841 "bdev_name": "Nvme2n2" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd5", 00:06:38.841 "bdev_name": "Nvme2n3" 00:06:38.841 }, 00:06:38.841 { 00:06:38.841 "nbd_device": "/dev/nbd6", 00:06:38.841 "bdev_name": "Nvme3n1" 00:06:38.841 } 00:06:38.841 ]' 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.841 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.103 02:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.103 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.360 02:53:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:39.617 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.618 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.875 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.133 02:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
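Once the last device is stopped, the verification in the lines that follow asks the app for its export list again: nbd_get_disks returns '[]', the jq filter extracts no device paths, and grep -c /dev/nbd therefore counts 0 (grep exits nonzero on zero matches, which is why a bare true runs after it in the trace). The equivalent one-liner:

    # After teardown the exported-device count should be zero:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # prints 0; exit status 1 is expected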
00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.389 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.390 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.390 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.390 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.647 02:53:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:40.647 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:40.908 /dev/nbd0 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.908 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.909 1+0 records in 00:06:40.909 1+0 records out 00:06:40.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381451 s, 10.7 MB/s 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:40.909 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:40.909 /dev/nbd1 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.174 02:53:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.174 1+0 records in 00:06:41.174 1+0 records out 00:06:41.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327153 s, 12.5 MB/s 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.174 02:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:41.174 /dev/nbd10 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.174 1+0 records in 00:06:41.174 1+0 records out 00:06:41.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390045 s, 10.5 MB/s 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.174 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:41.437 /dev/nbd11 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.437 1+0 records in 00:06:41.437 1+0 records out 00:06:41.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106018 s, 3.9 MB/s 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.437 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:41.700 /dev/nbd12 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
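The polling traced above for each device (and repeated below for nbd12, nbd13 and nbd14) is the waitfornbd helper from common/autotest_common.sh. A minimal sketch of its apparent logic, reconstructed from this xtrace rather than copied from the SPDK source (paths shortened; the retry delay is an assumption, since it is not visible in the trace):

    # Poll until the nbd device shows up in /proc/partitions, then prove the
    # NBD connection actually serves I/O with one direct (uncached) 4 KiB read.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in the xtrace
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            local size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

Each nbd_start_disk RPC above is only treated as successful once this check returns 0, which is why every device start is followed by the same grep/dd/stat sequence in the trace.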
00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.700 1+0 records in 00:06:41.700 1+0 records out 00:06:41.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081044 s, 5.1 MB/s 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.700 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:41.963 /dev/nbd13 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.963 1+0 records in 00:06:41.963 1+0 records out 00:06:41.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071487 s, 5.7 MB/s 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:41.963 02:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:42.224 /dev/nbd14 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.224 1+0 records in 00:06:42.224 1+0 records out 00:06:42.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001105 s, 3.7 MB/s 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.224 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd0", 00:06:42.485 "bdev_name": "Nvme0n1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd1", 00:06:42.485 "bdev_name": "Nvme1n1p1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd10", 00:06:42.485 "bdev_name": "Nvme1n1p2" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd11", 00:06:42.485 "bdev_name": "Nvme2n1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd12", 00:06:42.485 "bdev_name": "Nvme2n2" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd13", 00:06:42.485 "bdev_name": "Nvme2n3" 
00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd14", 00:06:42.485 "bdev_name": "Nvme3n1" 00:06:42.485 } 00:06:42.485 ]' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd0", 00:06:42.485 "bdev_name": "Nvme0n1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd1", 00:06:42.485 "bdev_name": "Nvme1n1p1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd10", 00:06:42.485 "bdev_name": "Nvme1n1p2" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd11", 00:06:42.485 "bdev_name": "Nvme2n1" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd12", 00:06:42.485 "bdev_name": "Nvme2n2" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd13", 00:06:42.485 "bdev_name": "Nvme2n3" 00:06:42.485 }, 00:06:42.485 { 00:06:42.485 "nbd_device": "/dev/nbd14", 00:06:42.485 "bdev_name": "Nvme3n1" 00:06:42.485 } 00:06:42.485 ]' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.485 /dev/nbd1 00:06:42.485 /dev/nbd10 00:06:42.485 /dev/nbd11 00:06:42.485 /dev/nbd12 00:06:42.485 /dev/nbd13 00:06:42.485 /dev/nbd14' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.485 /dev/nbd1 00:06:42.485 /dev/nbd10 00:06:42.485 /dev/nbd11 00:06:42.485 /dev/nbd12 00:06:42.485 /dev/nbd13 00:06:42.485 /dev/nbd14' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:42.485 256+0 records in 00:06:42.485 256+0 records out 00:06:42.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602305 s, 174 MB/s 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.485 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.746 256+0 records in 00:06:42.746 256+0 records out 00:06:42.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.239312 s, 4.4 MB/s 00:06:42.746 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.746 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.007 256+0 records in 00:06:43.007 256+0 records out 00:06:43.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19858 s, 5.3 MB/s 00:06:43.007 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.007 02:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:43.269 256+0 records in 00:06:43.269 256+0 records out 00:06:43.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186067 s, 5.6 MB/s 00:06:43.269 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.269 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:43.269 256+0 records in 00:06:43.269 256+0 records out 00:06:43.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158443 s, 6.6 MB/s 00:06:43.269 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.269 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:43.529 256+0 records in 00:06:43.529 256+0 records out 00:06:43.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164093 s, 6.4 MB/s 00:06:43.529 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.529 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:43.790 256+0 records in 00:06:43.790 256+0 records out 00:06:43.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234033 s, 4.5 MB/s 00:06:43.790 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.790 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:44.051 256+0 records in 00:06:44.051 256+0 records out 00:06:44.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224543 s, 4.7 MB/s 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.051 02:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.311 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.572 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.833 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.094 02:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.094 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.355 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.614 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:45.874 02:54:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:46.135 malloc_lvol_verify 00:06:46.135 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:46.396 f66569dd-bea9-4115-a608-c97afe06c219 00:06:46.396 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:46.656 c4caf30f-027a-40bf-a580-184f8e69e633 00:06:46.656 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:46.914 /dev/nbd0 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:46.915 mke2fs 1.47.0 (5-Feb-2023) 00:06:46.915 Discarding device blocks: 0/4096 done 00:06:46.915 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:46.915 00:06:46.915 Allocating group tables: 0/1 done 00:06:46.915 Writing inode tables: 0/1 done 00:06:46.915 Creating journal (1024 blocks): done 00:06:46.915 Writing superblocks and filesystem accounting information: 0/1 done 00:06:46.915 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:46.915 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72931 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72931 ']' 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72931 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72931 00:06:47.175 killing process with pid 72931 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72931' 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72931 00:06:47.175 02:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72931 00:06:47.175 ************************************ 00:06:47.175 END TEST bdev_nbd 00:06:47.175 ************************************ 00:06:47.175 02:54:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:47.175 00:06:47.175 real 0m11.331s 00:06:47.175 user 0m15.784s 00:06:47.175 sys 0m4.002s 00:06:47.175 02:54:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.175 02:54:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:47.175 skipping fio tests on NVMe due to multi-ns failures. 00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:47.175 02:54:03 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:47.175 02:54:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:47.175 02:54:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.175 02:54:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.436 ************************************ 00:06:47.436 START TEST bdev_verify 00:06:47.436 ************************************ 00:06:47.436 02:54:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:47.436 [2024-11-29 02:54:03.234952] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:47.436 [2024-11-29 02:54:03.235069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73350 ] 00:06:47.436 [2024-11-29 02:54:03.379003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.436 [2024-11-29 02:54:03.400257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.436 [2024-11-29 02:54:03.400321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.008 Running I/O for 5 seconds... 
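bdevperf drives the verify workload entirely from the --json config named on the command line above. The bdev.json used by this job attaches real NVMe controllers and is not reproduced in the log; a hypothetical minimal config of the same overall shape, using a Malloc bdev so it runs anywhere, would look like this:

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5

With -w verify, bdevperf writes a pattern and reads every block back, failing the run on any mismatch; -q 128 keeps 128 I/Os in flight per job and -o 4096 sets the I/O size in bytes.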
00:06:50.388 19456.00 IOPS, 76.00 MiB/s
[2024-11-29T02:54:07.321Z] 19776.00 IOPS, 77.25 MiB/s
[2024-11-29T02:54:08.263Z] 19562.67 IOPS, 76.42 MiB/s
[2024-11-29T02:54:09.205Z] 19920.00 IOPS, 77.81 MiB/s
[2024-11-29T02:54:09.205Z] 19878.40 IOPS, 77.65 MiB/s
00:06:53.213 Latency(us)
00:06:53.213 [2024-11-29T02:54:09.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:53.213 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0xbd0bd
00:06:53.213 Nvme0n1 : 5.03 1398.95 5.46 0.00 0.00 91191.65 19862.45 81466.29
00:06:53.213 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:53.213 Nvme0n1 : 5.06 1391.60 5.44 0.00 0.00 91685.08 20971.52 88322.36
00:06:53.213 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x4ff80
00:06:53.213 Nvme1n1p1 : 5.06 1402.85 5.48 0.00 0.00 90711.38 10032.05 77030.01
00:06:53.213 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:53.213 Nvme1n1p1 : 5.06 1391.17 5.43 0.00 0.00 91396.15 23290.49 75416.81
00:06:53.213 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x4ff7f
00:06:53.213 Nvme1n1p2 : 5.07 1402.44 5.48 0.00 0.00 90600.62 10233.70 74206.92
00:06:53.213 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:53.213 Nvme1n1p2 : 5.06 1390.77 5.43 0.00 0.00 91220.97 22887.19 73803.62
00:06:53.213 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x80000
00:06:53.213 Nvme2n1 : 5.08 1410.62 5.51 0.00 0.00 90147.45 12502.25 73400.32
00:06:53.213 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x80000 length 0x80000
00:06:53.213 Nvme2n1 : 5.08 1398.55 5.46 0.00 0.00 90542.95 6225.92 72190.42
00:06:53.213 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x80000
00:06:53.213 Nvme2n2 : 5.08 1410.05 5.51 0.00 0.00 90003.19 13510.50 76223.41
00:06:53.213 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x80000 length 0x80000
00:06:53.213 Nvme2n2 : 5.08 1398.19 5.46 0.00 0.00 90386.07 6604.01 72593.72
00:06:53.213 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x80000
00:06:53.213 Nvme2n3 : 5.08 1409.68 5.51 0.00 0.00 89849.19 13812.97 80256.39
00:06:53.213 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x80000 length 0x80000
00:06:53.213 Nvme2n3 : 5.09 1407.97 5.50 0.00 0.00 89745.82 7158.55 76223.41
00:06:53.213 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x0 length 0x20000
00:06:53.213 Nvme3n1 : 5.09 1409.32 5.51 0.00 0.00 89705.34 12653.49 78643.20
00:06:53.213 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:53.213 Verification LBA range: start 0x20000 length 0x20000
00:06:53.213 Nvme3n1 : 5.09 1407.60 5.50 0.00 0.00 89678.45 7511.43 77836.60
[2024-11-29T02:54:09.205Z] ===================================================================================================================
[2024-11-29T02:54:09.205Z] Total : 19629.77 76.68 0.00 0.00 90485.59 6225.92 88322.36
00:06:53.786
00:06:53.786 real 0m6.322s
00:06:53.786 user 0m11.952s
00:06:53.786 sys 0m0.206s
00:06:53.786 02:54:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.786 ************************************
00:06:53.786 END TEST bdev_verify
00:06:53.786 ************************************
00:06:53.786 02:54:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:53.786 02:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:53.786 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:53.786 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.786 02:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:53.786 ************************************
00:06:53.786 START TEST bdev_verify_big_io
00:06:53.786 ************************************
00:06:53.786 02:54:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:53.786 [2024-11-29 02:54:09.620402] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:06:53.786 [2024-11-29 02:54:09.620515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ]
00:06:53.786 [2024-11-29 02:54:09.766601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:54.047 [2024-11-29 02:54:09.786905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.047 [2024-11-29 02:54:09.786922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.308 Running I/O for 5 seconds...
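The MiB/s column in the bdev_verify table above follows directly from IOPS and the 4096-byte I/O size: MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. That identity is a quick sanity check on these tables (simple arithmetic, not taken from the log):

    awk 'BEGIN { printf "%.2f\n", 19629.77 / 256 }'   # 76.68, matching the Total row above
    awk 'BEGIN { printf "%.2f\n", 1319.93 / 16 }'     # 82.50, the Total of the 65536-byte table below

With 65536-byte I/Os the divisor is 16, which is why the big-I/O run that follows reports far lower IOPS at a similar MiB/s.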
00:07:00.589 1754.00 IOPS, 109.62 MiB/s
[2024-11-29T02:54:16.581Z] 3490.50 IOPS, 218.16 MiB/s
00:07:00.589 Latency(us)
[2024-11-29T02:54:16.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:00.589 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0xbd0b
00:07:00.589 Nvme0n1 : 6.09 79.76 4.98 0.00 0.00 1546871.27 14922.04 1484138.34
00:07:00.589 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:00.589 Nvme0n1 : 5.90 84.07 5.25 0.00 0.00 1433495.52 31255.63 1677721.60
00:07:00.589 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x4ff8
00:07:00.589 Nvme1n1p1 : 6.09 79.58 4.97 0.00 0.00 1496103.83 71787.13 1329271.73
00:07:00.589 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:00.589 Nvme1n1p1 : 5.98 88.27 5.52 0.00 0.00 1337339.99 116956.55 1716438.25
00:07:00.589 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x4ff7
00:07:00.589 Nvme1n1p2 : 6.10 71.68 4.48 0.00 0.00 1604413.57 70577.23 2852126.72
00:07:00.589 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:00.589 Nvme1n1p2 : 5.99 106.92 6.68 0.00 0.00 1083576.95 79853.10 1161499.57
00:07:00.589 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x8000
00:07:00.589 Nvme2n1 : 6.10 71.99 4.50 0.00 0.00 1545334.98 72997.02 2890843.37
00:07:00.589 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x8000 length 0x8000
00:07:00.589 Nvme2n1 : 5.99 106.86 6.68 0.00 0.00 1048572.53 80256.39 1187310.67
00:07:00.589 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x8000
00:07:00.589 Nvme2n2 : 6.10 75.21 4.70 0.00 0.00 1437751.99 73803.62 2929560.02
00:07:00.589 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x8000 length 0x8000
00:07:00.589 Nvme2n2 : 6.06 109.76 6.86 0.00 0.00 987965.70 67350.84 1219574.55
00:07:00.589 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x8000
00:07:00.589 Nvme2n3 : 6.19 85.82 5.36 0.00 0.00 1222681.41 22483.89 2994087.78
00:07:00.589 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x8000 length 0x8000
00:07:00.589 Nvme2n3 : 6.16 119.90 7.49 0.00 0.00 877512.85 20064.10 1264743.98
00:07:00.589 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x0 length 0x2000
00:07:00.589 Nvme3n1 : 6.21 105.30 6.58 0.00 0.00 965887.37 1115.37 3084426.63
00:07:00.589 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:00.589 Verification LBA range: start 0x2000 length 0x2000
00:07:00.589 Nvme3n1 : 6.17 134.82 8.43 0.00 0.00 761773.44 2318.97 1290555.08
[2024-11-29T02:54:16.581Z] ===================================================================================================================
[2024-11-29T02:54:16.581Z] Total : 1319.93 82.50 0.00 0.00 1185252.47 1115.37 3084426.63
00:07:01.162
00:07:01.162 real 0m7.552s
00:07:01.162 user 0m14.414s
00:07:01.162 sys 0m0.200s
00:07:01.162 02:54:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.162 ************************************
00:07:01.162 END TEST bdev_verify_big_io
00:07:01.162 ************************************
00:07:01.162 02:54:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:01.423 02:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:01.423 02:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:01.423 02:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.423 02:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:01.423 ************************************
00:07:01.423 START TEST bdev_write_zeroes
00:07:01.423 ************************************
00:07:01.423 02:54:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:01.423 [2024-11-29 02:54:17.232023] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:07:01.423 [2024-11-29 02:54:17.232148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73541 ]
00:07:01.423 [2024-11-29 02:54:17.377100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.423 [2024-11-29 02:54:17.398210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.994 Running I/O for 1 seconds...
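Every test in this run is wrapped by run_test from common/autotest_common.sh, which produces the START TEST/END TEST banners and the real/user/sys timing visible after each run above. A sketch inferred from those banners, not the actual implementation:

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"            # the wrapped command, e.g. a bdevperf invocation
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

Presumably the real wrapper also propagates the wrapped command's exit code, since that is what decides the pass/fail status the pipeline reports.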
00:07:02.933 55936.00 IOPS, 218.50 MiB/s
00:07:02.933 Latency(us)
[2024-11-29T02:54:18.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:02.933 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme0n1 : 1.03 7983.10 31.18 0.00 0.00 15994.93 10989.88 26416.05
00:07:02.933 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme1n1p1 : 1.03 7973.28 31.15 0.00 0.00 15987.35 11846.89 25710.28
00:07:02.933 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme1n1p2 : 1.03 7963.53 31.11 0.00 0.00 15929.32 8620.50 24903.68
00:07:02.933 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme2n1 : 1.03 7954.59 31.07 0.00 0.00 15923.65 8519.68 24197.91
00:07:02.933 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme2n2 : 1.03 7945.68 31.04 0.00 0.00 15921.03 8469.27 23492.14
00:07:02.933 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme2n3 : 1.03 7936.69 31.00 0.00 0.00 15911.20 7914.73 24903.68
00:07:02.933 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:02.933 Nvme3n1 : 1.03 7865.85 30.73 0.00 0.00 16023.81 10435.35 26416.05
00:07:02.933 [2024-11-29T02:54:18.925Z] ===================================================================================================================
[2024-11-29T02:54:18.925Z] Total : 55622.72 217.28 0.00 0.00 15955.82 7914.73 26416.05
00:07:03.197
00:07:03.197 real 0m1.840s
00:07:03.197 user 0m1.560s
00:07:03.197 sys 0m0.167s
00:07:03.197 02:54:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.197 ************************************
00:07:03.197 END TEST bdev_write_zeroes
00:07:03.197 ************************************
00:07:03.197 02:54:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:03.197 02:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:03.197 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:03.197 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.197 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:03.197 ************************************
00:07:03.197 START TEST bdev_json_nonenclosed
00:07:03.197 ************************************
00:07:03.197 02:54:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:03.197 [2024-11-29 02:54:19.135824] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
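The run below is a negative test: nonenclosed.json deliberately violates the config format so that json_config_prepare_ctx rejects it with "not enclosed in {}". The file's exact contents are not in this log; one plausible shape that would trigger exactly that error is a top-level subsystems key without the enclosing object braces:

    cat > nonenclosed.json <<'EOF'
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
    EOF

Because the top-level JSON value is not an object, the config loader bails out before any bdev is created, which is precisely what this test expects.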
00:07:03.197 [2024-11-29 02:54:19.135937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73583 ] 00:07:03.459 [2024-11-29 02:54:19.282477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.459 [2024-11-29 02:54:19.301447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.459 [2024-11-29 02:54:19.301530] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:03.459 [2024-11-29 02:54:19.301544] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:03.459 [2024-11-29 02:54:19.301556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.459 00:07:03.459 real 0m0.292s 00:07:03.459 user 0m0.110s 00:07:03.459 sys 0m0.079s 00:07:03.459 02:54:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.459 ************************************ 00:07:03.459 END TEST bdev_json_nonenclosed 00:07:03.459 ************************************ 00:07:03.459 02:54:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:03.459 02:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.459 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:03.459 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.459 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.459 ************************************ 00:07:03.459 START TEST bdev_json_nonarray 00:07:03.459 ************************************ 00:07:03.459 02:54:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.720 [2024-11-29 02:54:19.490624] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:03.720 [2024-11-29 02:54:19.490734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:07:03.720 [2024-11-29 02:54:19.638804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.720 [2024-11-29 02:54:19.657569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.720 [2024-11-29 02:54:19.657651] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
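bdev_json_nonarray is the sibling negative test: the error just traced says the "subsystems" key exists but is not an array. Again the actual nonarray.json is not shown in the log; a hypothetical config matching that error would make subsystems an object instead:

    cat > nonarray.json <<'EOF'
    {
      "subsystems": {
        "subsystem": "bdev",
        "config": []
      }
    }
    EOF

Both negative tests pass when bdevperf exits through the error path shown here instead of starting I/O.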
00:07:03.720 [2024-11-29 02:54:19.657669] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:03.720 [2024-11-29 02:54:19.657681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.981 00:07:03.981 real 0m0.289s 00:07:03.981 user 0m0.109s 00:07:03.981 sys 0m0.077s 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 END TEST bdev_json_nonarray 00:07:03.981 ************************************ 00:07:03.981 02:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:03.981 02:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:03.981 02:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:03.981 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.981 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.981 02:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 START TEST bdev_gpt_uuid 00:07:03.981 ************************************ 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73623 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73623 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73623 ']' 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 02:54:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.981 [2024-11-29 02:54:19.852623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
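Once spdk_tgt is up and the bdev.json config is loaded, the test below fetches each GPT partition bdev by its unique partition GUID and asserts on the returned JSON with jq. Condensed to its essentials (the trace's rpc_cmd wrapper is, in effect, rpc.py pointed at the target's default socket):

    bdev=$(scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    [[ $(jq -r length <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]

As the trace that follows shows, the same checks are then repeated for the second partition's GUID, abf1734f-66e5-4c0f-aa29-4021d4d307df, proving that both GPT partitions on Nvme1n1 were exposed as bdevs with the expected identities.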
00:07:03.981 [2024-11-29 02:54:19.852735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73623 ] 00:07:04.243 [2024-11-29 02:54:19.998912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.243 [2024-11-29 02:54:20.017770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.814 02:54:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.814 02:54:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:04.814 02:54:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.814 02:54:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.814 02:54:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:05.075 Some configs were skipped because the RPC state that can call them passed over. 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:05.075 { 00:07:05.075 "name": "Nvme1n1p1", 00:07:05.075 "aliases": [ 00:07:05.075 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:05.075 ], 00:07:05.075 "product_name": "GPT Disk", 00:07:05.075 "block_size": 4096, 00:07:05.075 "num_blocks": 655104, 00:07:05.075 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:05.075 "assigned_rate_limits": { 00:07:05.075 "rw_ios_per_sec": 0, 00:07:05.075 "rw_mbytes_per_sec": 0, 00:07:05.075 "r_mbytes_per_sec": 0, 00:07:05.075 "w_mbytes_per_sec": 0 00:07:05.075 }, 00:07:05.075 "claimed": false, 00:07:05.075 "zoned": false, 00:07:05.075 "supported_io_types": { 00:07:05.075 "read": true, 00:07:05.075 "write": true, 00:07:05.075 "unmap": true, 00:07:05.075 "flush": true, 00:07:05.075 "reset": true, 00:07:05.075 "nvme_admin": false, 00:07:05.075 "nvme_io": false, 00:07:05.075 "nvme_io_md": false, 00:07:05.075 "write_zeroes": true, 00:07:05.075 "zcopy": false, 00:07:05.075 "get_zone_info": false, 00:07:05.075 "zone_management": false, 00:07:05.075 "zone_append": false, 00:07:05.075 "compare": true, 00:07:05.075 "compare_and_write": false, 00:07:05.075 "abort": true, 00:07:05.075 "seek_hole": false, 00:07:05.075 "seek_data": false, 00:07:05.075 "copy": true, 00:07:05.075 "nvme_iov_md": false 00:07:05.075 }, 00:07:05.075 "driver_specific": { 
00:07:05.075 "gpt": { 00:07:05.075 "base_bdev": "Nvme1n1", 00:07:05.075 "offset_blocks": 256, 00:07:05.075 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:05.075 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:05.075 "partition_name": "SPDK_TEST_first" 00:07:05.075 } 00:07:05.075 } 00:07:05.075 } 00:07:05.075 ]' 00:07:05.075 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:05.337 { 00:07:05.337 "name": "Nvme1n1p2", 00:07:05.337 "aliases": [ 00:07:05.337 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:05.337 ], 00:07:05.337 "product_name": "GPT Disk", 00:07:05.337 "block_size": 4096, 00:07:05.337 "num_blocks": 655103, 00:07:05.337 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:05.337 "assigned_rate_limits": { 00:07:05.337 "rw_ios_per_sec": 0, 00:07:05.337 "rw_mbytes_per_sec": 0, 00:07:05.337 "r_mbytes_per_sec": 0, 00:07:05.337 "w_mbytes_per_sec": 0 00:07:05.337 }, 00:07:05.337 "claimed": false, 00:07:05.337 "zoned": false, 00:07:05.337 "supported_io_types": { 00:07:05.337 "read": true, 00:07:05.337 "write": true, 00:07:05.337 "unmap": true, 00:07:05.337 "flush": true, 00:07:05.337 "reset": true, 00:07:05.337 "nvme_admin": false, 00:07:05.337 "nvme_io": false, 00:07:05.337 "nvme_io_md": false, 00:07:05.337 "write_zeroes": true, 00:07:05.337 "zcopy": false, 00:07:05.337 "get_zone_info": false, 00:07:05.337 "zone_management": false, 00:07:05.337 "zone_append": false, 00:07:05.337 "compare": true, 00:07:05.337 "compare_and_write": false, 00:07:05.337 "abort": true, 00:07:05.337 "seek_hole": false, 00:07:05.337 "seek_data": false, 00:07:05.337 "copy": true, 00:07:05.337 "nvme_iov_md": false 00:07:05.337 }, 00:07:05.337 "driver_specific": { 00:07:05.337 "gpt": { 00:07:05.337 "base_bdev": "Nvme1n1", 00:07:05.337 "offset_blocks": 655360, 00:07:05.337 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:05.337 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:05.337 "partition_name": "SPDK_TEST_second" 00:07:05.337 } 00:07:05.337 } 00:07:05.337 } 00:07:05.337 ]' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73623 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73623 ']' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73623 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73623 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.337 killing process with pid 73623 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73623' 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73623 00:07:05.337 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73623 00:07:05.599 00:07:05.599 real 0m1.751s 00:07:05.599 user 0m1.942s 00:07:05.599 sys 0m0.309s 00:07:05.599 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.599 ************************************ 00:07:05.599 END TEST bdev_gpt_uuid 00:07:05.599 ************************************ 00:07:05.599 02:54:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:05.599 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:05.860 02:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:06.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.120 Waiting for block devices as requested 00:07:06.120 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.380 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:06.380 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.380 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.664 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:11.664 02:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:11.664 02:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:11.923 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:11.923 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:11.923 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:11.923 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:11.923 02:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:11.923 00:07:11.923 real 0m49.117s 00:07:11.923 user 1m2.258s 00:07:11.923 sys 0m7.857s 00:07:11.923 02:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.923 02:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.923 ************************************ 00:07:11.923 END TEST blockdev_nvme_gpt 00:07:11.923 ************************************ 00:07:11.923 02:54:27 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:11.923 02:54:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.923 02:54:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.923 02:54:27 -- common/autotest_common.sh@10 -- # set +x 00:07:11.923 ************************************ 00:07:11.923 START TEST nvme 00:07:11.923 ************************************ 00:07:11.923 02:54:27 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:11.923 * Looking for test storage... 00:07:11.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:11.923 02:54:27 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.923 02:54:27 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.923 02:54:27 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.923 02:54:27 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.923 02:54:27 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.923 02:54:27 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.923 02:54:27 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.923 02:54:27 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.923 02:54:27 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.923 02:54:27 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.923 02:54:27 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.923 02:54:27 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.923 02:54:27 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.923 02:54:27 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:11.923 02:54:27 nvme -- scripts/common.sh@345 -- # : 1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.923 02:54:27 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.923 02:54:27 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@353 -- # local d=1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.923 02:54:27 nvme -- scripts/common.sh@355 -- # echo 1 00:07:11.923 02:54:27 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.924 02:54:27 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:11.924 02:54:27 nvme -- scripts/common.sh@353 -- # local d=2 00:07:11.924 02:54:27 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.924 02:54:27 nvme -- scripts/common.sh@355 -- # echo 2 00:07:11.924 02:54:27 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.924 02:54:27 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.924 02:54:27 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.924 02:54:27 nvme -- scripts/common.sh@368 -- # return 0 00:07:11.924 02:54:27 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.924 02:54:27 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.924 --rc genhtml_branch_coverage=1 00:07:11.924 --rc genhtml_function_coverage=1 00:07:11.924 --rc genhtml_legend=1 00:07:11.924 --rc geninfo_all_blocks=1 00:07:11.924 --rc geninfo_unexecuted_blocks=1 00:07:11.924 00:07:11.924 ' 00:07:11.924 02:54:27 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.924 --rc genhtml_branch_coverage=1 00:07:11.924 --rc genhtml_function_coverage=1 00:07:11.924 --rc genhtml_legend=1 00:07:11.924 --rc geninfo_all_blocks=1 00:07:11.924 --rc geninfo_unexecuted_blocks=1 00:07:11.924 00:07:11.924 ' 00:07:11.924 02:54:27 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.924 --rc genhtml_branch_coverage=1 00:07:11.924 --rc genhtml_function_coverage=1 00:07:11.924 --rc genhtml_legend=1 00:07:11.924 --rc geninfo_all_blocks=1 00:07:11.924 --rc geninfo_unexecuted_blocks=1 00:07:11.924 00:07:11.924 ' 00:07:11.924 02:54:27 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.924 --rc genhtml_branch_coverage=1 00:07:11.924 --rc genhtml_function_coverage=1 00:07:11.924 --rc genhtml_legend=1 00:07:11.924 --rc geninfo_all_blocks=1 00:07:11.924 --rc geninfo_unexecuted_blocks=1 00:07:11.924 00:07:11.924 ' 00:07:11.924 02:54:27 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.746 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.004 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.004 02:54:28 nvme -- nvme/nvme.sh@79 -- # uname 00:07:13.004 02:54:28 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:13.004 02:54:28 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:13.004 02:54:28 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:13.004 02:54:28 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1075 -- # stubpid=74248 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:13.004 Waiting for stub to ready for secondary processes... 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74248 ]] 00:07:13.004 02:54:28 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:13.004 [2024-11-29 02:54:28.823956] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:13.004 [2024-11-29 02:54:28.824073] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:13.569 [2024-11-29 02:54:29.548506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.826 [2024-11-29 02:54:29.560750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.827 [2024-11-29 02:54:29.560961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.827 [2024-11-29 02:54:29.560982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.827 [2024-11-29 02:54:29.570662] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:13.827 [2024-11-29 02:54:29.570692] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:13.827 [2024-11-29 02:54:29.579400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:13.827 [2024-11-29 02:54:29.579578] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:13.827 [2024-11-29 02:54:29.580054] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:13.827 [2024-11-29 02:54:29.580175] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:13.827 [2024-11-29 02:54:29.580233] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:13.827 [2024-11-29 02:54:29.580664] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:13.827 [2024-11-29 02:54:29.580799] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:13.827 [2024-11-29 02:54:29.580848] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:13.827 [2024-11-29 02:54:29.581581] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:13.827 [2024-11-29 02:54:29.581690] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:13.827 [2024-11-29 02:54:29.581731] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:13.827 [2024-11-29 02:54:29.581763] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:13.827 [2024-11-29 02:54:29.581794] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:13.827 done. 00:07:13.827 02:54:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:13.827 02:54:29 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:13.827 02:54:29 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:13.827 02:54:29 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:13.827 02:54:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.827 02:54:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.827 ************************************ 00:07:13.827 START TEST nvme_reset 00:07:13.827 ************************************ 00:07:13.827 02:54:29 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:14.084 Initializing NVMe Controllers 00:07:14.084 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:14.084 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:14.084 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:14.084 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:14.084 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:14.084 00:07:14.084 real 0m0.185s 00:07:14.084 user 0m0.063s 00:07:14.084 sys 0m0.074s 00:07:14.084 02:54:29 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.084 ************************************ 00:07:14.084 END TEST nvme_reset 00:07:14.084 ************************************ 00:07:14.084 02:54:29 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:14.084 02:54:30 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:14.084 02:54:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.084 02:54:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.084 02:54:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.084 ************************************ 00:07:14.084 START TEST nvme_identify 00:07:14.084 ************************************ 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:14.084 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:14.084 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:14.084 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:14.084 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:14.084 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:14.343 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:14.343 02:54:30 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:14.343 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:14.343 
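Before dumping the per-controller report that follows, nvme_identify builds its list of PCI addresses from gen_nvme.sh output, exactly as traced above in get_nvme_bdfs. A minimal sketch of that enumeration step:

  # gen_nvme.sh emits one JSON bdev config entry per local NVMe controller;
  # jq pulls out the PCI addresses (traddr), as in get_nvme_bdfs.
  bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf 'found %d controllers: %s\n' "${#bdfs[@]}" "${bdfs[*]}"
  # Then the identify data for all of them is dumped in one pass:
  ./build/bin/spdk_nvme_identify -i 0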
===================================================== 00:07:14.343 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:14.343 ===================================================== 00:07:14.343 Controller Capabilities/Features 00:07:14.343 ================================ 00:07:14.343 Vendor ID: 1b36 00:07:14.343 Subsystem Vendor ID: 1af4 00:07:14.343 Serial Number: 12340 00:07:14.343 Model Number: QEMU NVMe Ctrl 00:07:14.343 Firmware Version: 8.0.0 00:07:14.343 Recommended Arb Burst: 6 00:07:14.343 IEEE OUI Identifier: 00 54 52 00:07:14.343 Multi-path I/O 00:07:14.343 May have multiple subsystem ports: No 00:07:14.343 May have multiple controllers: No 00:07:14.343 Associated with SR-IOV VF: No 00:07:14.343 Max Data Transfer Size: 524288 00:07:14.343 Max Number of Namespaces: 256 00:07:14.343 Max Number of I/O Queues: 64 00:07:14.343 NVMe Specification Version (VS): 1.4 00:07:14.344 NVMe Specification Version (Identify): 1.4 00:07:14.344 Maximum Queue Entries: 2048 00:07:14.344 Contiguous Queues Required: Yes 00:07:14.344 Arbitration Mechanisms Supported 00:07:14.344 Weighted Round Robin: Not Supported 00:07:14.344 Vendor Specific: Not Supported 00:07:14.344 Reset Timeout: 7500 ms 00:07:14.344 Doorbell Stride: 4 bytes 00:07:14.344 NVM Subsystem Reset: Not Supported 00:07:14.344 Command Sets Supported 00:07:14.344 NVM Command Set: Supported 00:07:14.344 Boot Partition: Not Supported 00:07:14.344 Memory Page Size Minimum: 4096 bytes 00:07:14.344 Memory Page Size Maximum: 65536 bytes 00:07:14.344 Persistent Memory Region: Not Supported 00:07:14.344 Optional Asynchronous Events Supported 00:07:14.344 Namespace Attribute Notices: Supported 00:07:14.344 Firmware Activation Notices: Not Supported 00:07:14.344 ANA Change Notices: Not Supported 00:07:14.344 PLE Aggregate Log Change Notices: Not Supported 00:07:14.344 LBA Status Info Alert Notices: Not Supported 00:07:14.344 EGE Aggregate Log Change Notices: Not Supported 00:07:14.344 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.344 Zone Descriptor Change Notices: Not Supported 00:07:14.344 Discovery Log Change Notices: Not Supported 00:07:14.344 Controller Attributes 00:07:14.344 128-bit Host Identifier: Not Supported 00:07:14.344 Non-Operational Permissive Mode: Not Supported 00:07:14.344 NVM Sets: Not Supported 00:07:14.344 Read Recovery Levels: Not Supported 00:07:14.344 Endurance Groups: Not Supported 00:07:14.344 Predictable Latency Mode: Not Supported 00:07:14.344 Traffic Based Keep ALive: Not Supported 00:07:14.344 Namespace Granularity: Not Supported 00:07:14.344 SQ Associations: Not Supported 00:07:14.344 UUID List: Not Supported 00:07:14.344 Multi-Domain Subsystem: Not Supported 00:07:14.344 Fixed Capacity Management: Not Supported 00:07:14.344 Variable Capacity Management: Not Supported 00:07:14.344 Delete Endurance Group: Not Supported 00:07:14.344 Delete NVM Set: Not Supported 00:07:14.344 Extended LBA Formats Supported: Supported 00:07:14.344 Flexible Data Placement Supported: Not Supported 00:07:14.344 00:07:14.344 Controller Memory Buffer Support 00:07:14.344 ================================ 00:07:14.344 Supported: No 00:07:14.344 00:07:14.344 Persistent Memory Region Support 00:07:14.344 ================================ 00:07:14.344 Supported: No 00:07:14.344 00:07:14.344 Admin Command Set Attributes 00:07:14.344 ============================ 00:07:14.344 Security Send/Receive: Not Supported 00:07:14.344 Format NVM: Supported 00:07:14.344 Firmware Activate/Download: Not Supported 00:07:14.344 Namespace Management: 
Supported 00:07:14.344 Device Self-Test: Not Supported 00:07:14.344 Directives: Supported 00:07:14.344 NVMe-MI: Not Supported 00:07:14.344 Virtualization Management: Not Supported 00:07:14.344 Doorbell Buffer Config: Supported 00:07:14.344 Get LBA Status Capability: Not Supported 00:07:14.344 Command & Feature Lockdown Capability: Not Supported 00:07:14.344 Abort Command Limit: 4 00:07:14.344 Async Event Request Limit: 4 00:07:14.344 Number of Firmware Slots: N/A 00:07:14.344 Firmware Slot 1 Read-Only: N/A 00:07:14.344 Firmware Activation Without Reset: N/A 00:07:14.344 Multiple Update Detection Support: N/A 00:07:14.344 Firmware Update Granularity: No Information Provided 00:07:14.344 Per-Namespace SMART Log: Yes 00:07:14.344 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.344 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:14.344 Command Effects Log Page: Supported 00:07:14.344 Get Log Page Extended Data: Supported 00:07:14.344 Telemetry Log Pages: Not Supported 00:07:14.344 Persistent Event Log Pages: Not Supported 00:07:14.344 Supported Log Pages Log Page: May Support 00:07:14.344 Commands Supported & Effects Log Page: Not Supported 00:07:14.344 Feature Identifiers & Effects Log Page:May Support 00:07:14.344 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.344 Data Area 4 for Telemetry Log: Not Supported 00:07:14.344 Error Log Page Entries Supported: 1 00:07:14.344 Keep Alive: Not Supported 00:07:14.344 00:07:14.344 NVM Command Set Attributes 00:07:14.344 ========================== 00:07:14.344 Submission Queue Entry Size 00:07:14.344 Max: 64 00:07:14.344 Min: 64 00:07:14.344 Completion Queue Entry Size 00:07:14.344 Max: 16 00:07:14.344 Min: 16 00:07:14.344 Number of Namespaces: 256 00:07:14.344 Compare Command: Supported 00:07:14.344 Write Uncorrectable Command: Not Supported 00:07:14.344 Dataset Management Command: Supported 00:07:14.344 Write Zeroes Command: Supported 00:07:14.344 Set Features Save Field: Supported 00:07:14.344 Reservations: Not Supported 00:07:14.344 Timestamp: Supported 00:07:14.344 Copy: Supported 00:07:14.344 Volatile Write Cache: Present 00:07:14.344 Atomic Write Unit (Normal): 1 00:07:14.344 Atomic Write Unit (PFail): 1 00:07:14.344 Atomic Compare & Write Unit: 1 00:07:14.344 Fused Compare & Write: Not Supported 00:07:14.344 Scatter-Gather List 00:07:14.344 SGL Command Set: Supported 00:07:14.344 SGL Keyed: Not Supported 00:07:14.344 SGL Bit Bucket Descriptor: Not Supported 00:07:14.344 SGL Metadata Pointer: Not Supported 00:07:14.344 Oversized SGL: Not Supported 00:07:14.344 SGL Metadata Address: Not Supported 00:07:14.344 SGL Offset: Not Supported 00:07:14.344 Transport SGL Data Block: Not Supported 00:07:14.344 Replay Protected Memory Block: Not Supported 00:07:14.344 00:07:14.344 Firmware Slot Information 00:07:14.344 ========================= 00:07:14.344 Active slot: 1 00:07:14.344 Slot 1 Firmware Revision: 1.0 00:07:14.344 00:07:14.344 00:07:14.344 Commands Supported and Effects 00:07:14.344 ============================== 00:07:14.344 Admin Commands 00:07:14.344 -------------- 00:07:14.344 Delete I/O Submission Queue (00h): Supported 00:07:14.344 Create I/O Submission Queue (01h): Supported 00:07:14.344 Get Log Page (02h): Supported 00:07:14.344 Delete I/O Completion Queue (04h): Supported 00:07:14.344 Create I/O Completion Queue (05h): Supported 00:07:14.344 Identify (06h): Supported 00:07:14.344 Abort (08h): Supported 00:07:14.344 Set Features (09h): Supported 00:07:14.344 Get Features (0Ah): Supported 00:07:14.344 Asynchronous 
Event Request (0Ch): Supported 00:07:14.344 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.344 Directive Send (19h): Supported 00:07:14.344 Directive Receive (1Ah): Supported 00:07:14.344 Virtualization Management (1Ch): Supported 00:07:14.344 Doorbell Buffer Config (7Ch): Supported 00:07:14.344 Format NVM (80h): Supported LBA-Change 00:07:14.344 I/O Commands 00:07:14.344 ------------ 00:07:14.344 Flush (00h): Supported LBA-Change 00:07:14.344 Write (01h): Supported LBA-Change 00:07:14.344 Read (02h): Supported 00:07:14.344 Compare (05h): Supported 00:07:14.344 Write Zeroes (08h): Supported LBA-Change 00:07:14.344 Dataset Management (09h): Supported LBA-Change 00:07:14.344 Unknown (0Ch): Supported 00:07:14.344 Unknown (12h): Supported 00:07:14.344 Copy (19h): Supported LBA-Change 00:07:14.344 Unknown (1Dh): Supported LBA-Change 00:07:14.344 00:07:14.344 Error Log 00:07:14.344 ========= 00:07:14.344 00:07:14.344 Arbitration 00:07:14.344 =========== 00:07:14.344 Arbitration Burst: no limit 00:07:14.344 00:07:14.344 Power Management 00:07:14.344 ================ 00:07:14.344 Number of Power States: 1 00:07:14.344 Current Power State: Power State #0 00:07:14.344 Power State #0: 00:07:14.344 Max Power: 25.00 W 00:07:14.344 Non-Operational State: Operational 00:07:14.344 Entry Latency: 16 microseconds 00:07:14.344 Exit Latency: 4 microseconds 00:07:14.344 Relative Read Throughput: 0 00:07:14.344 Relative Read Latency: 0 00:07:14.344 Relative Write Throughput: 0 00:07:14.344 Relative Write Latency: 0 00:07:14.344 Idle Power[2024-11-29 02:54:30.233220] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74269 terminated unexpected 00:07:14.344 [2024-11-29 02:54:30.234067] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74269 terminated unexpected 00:07:14.344 : Not Reported 00:07:14.344 Active Power: Not Reported 00:07:14.344 Non-Operational Permissive Mode: Not Supported 00:07:14.344 00:07:14.344 Health Information 00:07:14.344 ================== 00:07:14.344 Critical Warnings: 00:07:14.344 Available Spare Space: OK 00:07:14.344 Temperature: OK 00:07:14.344 Device Reliability: OK 00:07:14.344 Read Only: No 00:07:14.344 Volatile Memory Backup: OK 00:07:14.344 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.345 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.345 Available Spare: 0% 00:07:14.345 Available Spare Threshold: 0% 00:07:14.345 Life Percentage Used: 0% 00:07:14.345 Data Units Read: 649 00:07:14.345 Data Units Written: 577 00:07:14.345 Host Read Commands: 35210 00:07:14.345 Host Write Commands: 34996 00:07:14.345 Controller Busy Time: 0 minutes 00:07:14.345 Power Cycles: 0 00:07:14.345 Power On Hours: 0 hours 00:07:14.345 Unsafe Shutdowns: 0 00:07:14.345 Unrecoverable Media Errors: 0 00:07:14.345 Lifetime Error Log Entries: 0 00:07:14.345 Warning Temperature Time: 0 minutes 00:07:14.345 Critical Temperature Time: 0 minutes 00:07:14.345 00:07:14.345 Number of Queues 00:07:14.345 ================ 00:07:14.345 Number of I/O Submission Queues: 64 00:07:14.345 Number of I/O Completion Queues: 64 00:07:14.345 00:07:14.345 ZNS Specific Controller Data 00:07:14.345 ============================ 00:07:14.345 Zone Append Size Limit: 0 00:07:14.345 00:07:14.345 00:07:14.345 Active Namespaces 00:07:14.345 ================= 00:07:14.345 Namespace ID:1 00:07:14.345 Error Recovery Timeout: Unlimited 00:07:14.345 Command Set Identifier: NVM (00h) 00:07:14.345 Deallocate: Supported 00:07:14.345 
Deallocated/Unwritten Error: Supported 00:07:14.345 Deallocated Read Value: All 0x00 00:07:14.345 Deallocate in Write Zeroes: Not Supported 00:07:14.345 Deallocated Guard Field: 0xFFFF 00:07:14.345 Flush: Supported 00:07:14.345 Reservation: Not Supported 00:07:14.345 Metadata Transferred as: Separate Metadata Buffer 00:07:14.345 Namespace Sharing Capabilities: Private 00:07:14.345 Size (in LBAs): 1548666 (5GiB) 00:07:14.345 Capacity (in LBAs): 1548666 (5GiB) 00:07:14.345 Utilization (in LBAs): 1548666 (5GiB) 00:07:14.345 Thin Provisioning: Not Supported 00:07:14.345 Per-NS Atomic Units: No 00:07:14.345 Maximum Single Source Range Length: 128 00:07:14.345 Maximum Copy Length: 128 00:07:14.345 Maximum Source Range Count: 128 00:07:14.345 NGUID/EUI64 Never Reused: No 00:07:14.345 Namespace Write Protected: No 00:07:14.345 Number of LBA Formats: 8 00:07:14.345 Current LBA Format: LBA Format #07 00:07:14.345 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.345 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.345 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.345 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.345 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.345 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.345 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.345 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.345 00:07:14.345 NVM Specific Namespace Data 00:07:14.345 =========================== 00:07:14.345 Logical Block Storage Tag Mask: 0 00:07:14.345 Protection Information Capabilities: 00:07:14.345 16b Guard Protection Information Storage Tag Support: No 00:07:14.345 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.345 Storage Tag Check Read Support: No 00:07:14.345 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.345 ===================================================== 00:07:14.345 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:14.345 ===================================================== 00:07:14.345 Controller Capabilities/Features 00:07:14.345 ================================ 00:07:14.345 Vendor ID: 1b36 00:07:14.345 Subsystem Vendor ID: 1af4 00:07:14.345 Serial Number: 12341 00:07:14.345 Model Number: QEMU NVMe Ctrl 00:07:14.345 Firmware Version: 8.0.0 00:07:14.345 Recommended Arb Burst: 6 00:07:14.345 IEEE OUI Identifier: 00 54 52 00:07:14.345 Multi-path I/O 00:07:14.345 May have multiple subsystem ports: No 00:07:14.345 May have multiple controllers: No 00:07:14.345 Associated with SR-IOV VF: No 00:07:14.345 Max Data Transfer Size: 524288 00:07:14.345 Max Number of Namespaces: 256 00:07:14.345 Max Number of I/O Queues: 64 00:07:14.345 NVMe Specification Version (VS): 1.4 00:07:14.345 NVMe 
Specification Version (Identify): 1.4 00:07:14.345 Maximum Queue Entries: 2048 00:07:14.345 Contiguous Queues Required: Yes 00:07:14.345 Arbitration Mechanisms Supported 00:07:14.345 Weighted Round Robin: Not Supported 00:07:14.345 Vendor Specific: Not Supported 00:07:14.345 Reset Timeout: 7500 ms 00:07:14.345 Doorbell Stride: 4 bytes 00:07:14.345 NVM Subsystem Reset: Not Supported 00:07:14.345 Command Sets Supported 00:07:14.345 NVM Command Set: Supported 00:07:14.345 Boot Partition: Not Supported 00:07:14.345 Memory Page Size Minimum: 4096 bytes 00:07:14.345 Memory Page Size Maximum: 65536 bytes 00:07:14.345 Persistent Memory Region: Not Supported 00:07:14.345 Optional Asynchronous Events Supported 00:07:14.345 Namespace Attribute Notices: Supported 00:07:14.345 Firmware Activation Notices: Not Supported 00:07:14.345 ANA Change Notices: Not Supported 00:07:14.345 PLE Aggregate Log Change Notices: Not Supported 00:07:14.345 LBA Status Info Alert Notices: Not Supported 00:07:14.345 EGE Aggregate Log Change Notices: Not Supported 00:07:14.345 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.345 Zone Descriptor Change Notices: Not Supported 00:07:14.345 Discovery Log Change Notices: Not Supported 00:07:14.345 Controller Attributes 00:07:14.345 128-bit Host Identifier: Not Supported 00:07:14.345 Non-Operational Permissive Mode: Not Supported 00:07:14.345 NVM Sets: Not Supported 00:07:14.345 Read Recovery Levels: Not Supported 00:07:14.345 Endurance Groups: Not Supported 00:07:14.345 Predictable Latency Mode: Not Supported 00:07:14.345 Traffic Based Keep ALive: Not Supported 00:07:14.345 Namespace Granularity: Not Supported 00:07:14.345 SQ Associations: Not Supported 00:07:14.345 UUID List: Not Supported 00:07:14.345 Multi-Domain Subsystem: Not Supported 00:07:14.345 Fixed Capacity Management: Not Supported 00:07:14.345 Variable Capacity Management: Not Supported 00:07:14.345 Delete Endurance Group: Not Supported 00:07:14.345 Delete NVM Set: Not Supported 00:07:14.345 Extended LBA Formats Supported: Supported 00:07:14.345 Flexible Data Placement Supported: Not Supported 00:07:14.345 00:07:14.345 Controller Memory Buffer Support 00:07:14.345 ================================ 00:07:14.345 Supported: No 00:07:14.345 00:07:14.345 Persistent Memory Region Support 00:07:14.345 ================================ 00:07:14.345 Supported: No 00:07:14.345 00:07:14.345 Admin Command Set Attributes 00:07:14.345 ============================ 00:07:14.345 Security Send/Receive: Not Supported 00:07:14.345 Format NVM: Supported 00:07:14.345 Firmware Activate/Download: Not Supported 00:07:14.345 Namespace Management: Supported 00:07:14.345 Device Self-Test: Not Supported 00:07:14.345 Directives: Supported 00:07:14.345 NVMe-MI: Not Supported 00:07:14.345 Virtualization Management: Not Supported 00:07:14.345 Doorbell Buffer Config: Supported 00:07:14.345 Get LBA Status Capability: Not Supported 00:07:14.345 Command & Feature Lockdown Capability: Not Supported 00:07:14.345 Abort Command Limit: 4 00:07:14.345 Async Event Request Limit: 4 00:07:14.345 Number of Firmware Slots: N/A 00:07:14.345 Firmware Slot 1 Read-Only: N/A 00:07:14.345 Firmware Activation Without Reset: N/A 00:07:14.345 Multiple Update Detection Support: N/A 00:07:14.345 Firmware Update Granularity: No Information Provided 00:07:14.345 Per-Namespace SMART Log: Yes 00:07:14.345 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.345 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:14.345 Command Effects Log Page: Supported 
00:07:14.345 Get Log Page Extended Data: Supported 00:07:14.345 Telemetry Log Pages: Not Supported 00:07:14.345 Persistent Event Log Pages: Not Supported 00:07:14.345 Supported Log Pages Log Page: May Support 00:07:14.345 Commands Supported & Effects Log Page: Not Supported 00:07:14.345 Feature Identifiers & Effects Log Page:May Support 00:07:14.345 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.345 Data Area 4 for Telemetry Log: Not Supported 00:07:14.345 Error Log Page Entries Supported: 1 00:07:14.345 Keep Alive: Not Supported 00:07:14.345 00:07:14.345 NVM Command Set Attributes 00:07:14.345 ========================== 00:07:14.345 Submission Queue Entry Size 00:07:14.345 Max: 64 00:07:14.346 Min: 64 00:07:14.346 Completion Queue Entry Size 00:07:14.346 Max: 16 00:07:14.346 Min: 16 00:07:14.346 Number of Namespaces: 256 00:07:14.346 Compare Command: Supported 00:07:14.346 Write Uncorrectable Command: Not Supported 00:07:14.346 Dataset Management Command: Supported 00:07:14.346 Write Zeroes Command: Supported 00:07:14.346 Set Features Save Field: Supported 00:07:14.346 Reservations: Not Supported 00:07:14.346 Timestamp: Supported 00:07:14.346 Copy: Supported 00:07:14.346 Volatile Write Cache: Present 00:07:14.346 Atomic Write Unit (Normal): 1 00:07:14.346 Atomic Write Unit (PFail): 1 00:07:14.346 Atomic Compare & Write Unit: 1 00:07:14.346 Fused Compare & Write: Not Supported 00:07:14.346 Scatter-Gather List 00:07:14.346 SGL Command Set: Supported 00:07:14.346 SGL Keyed: Not Supported 00:07:14.346 SGL Bit Bucket Descriptor: Not Supported 00:07:14.346 SGL Metadata Pointer: Not Supported 00:07:14.346 Oversized SGL: Not Supported 00:07:14.346 SGL Metadata Address: Not Supported 00:07:14.346 SGL Offset: Not Supported 00:07:14.346 Transport SGL Data Block: Not Supported 00:07:14.346 Replay Protected Memory Block: Not Supported 00:07:14.346 00:07:14.346 Firmware Slot Information 00:07:14.346 ========================= 00:07:14.346 Active slot: 1 00:07:14.346 Slot 1 Firmware Revision: 1.0 00:07:14.346 00:07:14.346 00:07:14.346 Commands Supported and Effects 00:07:14.346 ============================== 00:07:14.346 Admin Commands 00:07:14.346 -------------- 00:07:14.346 Delete I/O Submission Queue (00h): Supported 00:07:14.346 Create I/O Submission Queue (01h): Supported 00:07:14.346 Get Log Page (02h): Supported 00:07:14.346 Delete I/O Completion Queue (04h): Supported 00:07:14.346 Create I/O Completion Queue (05h): Supported 00:07:14.346 Identify (06h): Supported 00:07:14.346 Abort (08h): Supported 00:07:14.346 Set Features (09h): Supported 00:07:14.346 Get Features (0Ah): Supported 00:07:14.346 Asynchronous Event Request (0Ch): Supported 00:07:14.346 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.346 Directive Send (19h): Supported 00:07:14.346 Directive Receive (1Ah): Supported 00:07:14.346 Virtualization Management (1Ch): Supported 00:07:14.346 Doorbell Buffer Config (7Ch): Supported 00:07:14.346 Format NVM (80h): Supported LBA-Change 00:07:14.346 I/O Commands 00:07:14.346 ------------ 00:07:14.346 Flush (00h): Supported LBA-Change 00:07:14.346 Write (01h): Supported LBA-Change 00:07:14.346 Read (02h): Supported 00:07:14.346 Compare (05h): Supported 00:07:14.346 Write Zeroes (08h): Supported LBA-Change 00:07:14.346 Dataset Management (09h): Supported LBA-Change 00:07:14.346 Unknown (0Ch): Supported 00:07:14.346 Unknown (12h): Supported 00:07:14.346 Copy (19h): Supported LBA-Change 00:07:14.346 Unknown (1Dh): Supported LBA-Change 00:07:14.346 00:07:14.346 Error 
Log 00:07:14.346 ========= 00:07:14.346 00:07:14.346 Arbitration 00:07:14.346 =========== 00:07:14.346 Arbitration Burst: no limit 00:07:14.346 00:07:14.346 Power Management 00:07:14.346 ================ 00:07:14.346 Number of Power States: 1 00:07:14.346 Current Power State: Power State #0 00:07:14.346 Power State #0: 00:07:14.346 Max Power: 25.00 W 00:07:14.346 Non-Operational State: Operational 00:07:14.346 Entry Latency: 16 microseconds 00:07:14.346 Exit Latency: 4 microseconds 00:07:14.346 Relative Read Throughput: 0 00:07:14.346 Relative Read Latency: 0 00:07:14.346 Relative Write Throughput: 0 00:07:14.346 Relative Write Latency: 0 00:07:14.346 Idle Power: Not Reported 00:07:14.346 Active Power: Not Reported 00:07:14.346 Non-Operational Permissive Mode: Not Supported 00:07:14.346 00:07:14.346 Health Information 00:07:14.346 ================== 00:07:14.346 Critical Warnings: 00:07:14.346 Available Spare Space: OK 00:07:14.346 Temperature: [2024-11-29 02:54:30.234753] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74269 terminated unexpected 00:07:14.346 OK 00:07:14.346 Device Reliability: OK 00:07:14.346 Read Only: No 00:07:14.346 Volatile Memory Backup: OK 00:07:14.346 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.346 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.346 Available Spare: 0% 00:07:14.346 Available Spare Threshold: 0% 00:07:14.346 Life Percentage Used: 0% 00:07:14.346 Data Units Read: 963 00:07:14.346 Data Units Written: 836 00:07:14.346 Host Read Commands: 51442 00:07:14.346 Host Write Commands: 50346 00:07:14.346 Controller Busy Time: 0 minutes 00:07:14.346 Power Cycles: 0 00:07:14.346 Power On Hours: 0 hours 00:07:14.346 Unsafe Shutdowns: 0 00:07:14.346 Unrecoverable Media Errors: 0 00:07:14.346 Lifetime Error Log Entries: 0 00:07:14.346 Warning Temperature Time: 0 minutes 00:07:14.346 Critical Temperature Time: 0 minutes 00:07:14.346 00:07:14.346 Number of Queues 00:07:14.346 ================ 00:07:14.346 Number of I/O Submission Queues: 64 00:07:14.346 Number of I/O Completion Queues: 64 00:07:14.346 00:07:14.346 ZNS Specific Controller Data 00:07:14.346 ============================ 00:07:14.346 Zone Append Size Limit: 0 00:07:14.346 00:07:14.346 00:07:14.346 Active Namespaces 00:07:14.346 ================= 00:07:14.346 Namespace ID:1 00:07:14.346 Error Recovery Timeout: Unlimited 00:07:14.346 Command Set Identifier: NVM (00h) 00:07:14.346 Deallocate: Supported 00:07:14.346 Deallocated/Unwritten Error: Supported 00:07:14.346 Deallocated Read Value: All 0x00 00:07:14.346 Deallocate in Write Zeroes: Not Supported 00:07:14.346 Deallocated Guard Field: 0xFFFF 00:07:14.346 Flush: Supported 00:07:14.346 Reservation: Not Supported 00:07:14.346 Namespace Sharing Capabilities: Private 00:07:14.346 Size (in LBAs): 1310720 (5GiB) 00:07:14.346 Capacity (in LBAs): 1310720 (5GiB) 00:07:14.346 Utilization (in LBAs): 1310720 (5GiB) 00:07:14.346 Thin Provisioning: Not Supported 00:07:14.346 Per-NS Atomic Units: No 00:07:14.346 Maximum Single Source Range Length: 128 00:07:14.346 Maximum Copy Length: 128 00:07:14.346 Maximum Source Range Count: 128 00:07:14.346 NGUID/EUI64 Never Reused: No 00:07:14.346 Namespace Write Protected: No 00:07:14.346 Number of LBA Formats: 8 00:07:14.346 Current LBA Format: LBA Format #04 00:07:14.346 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.346 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.346 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.346 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:14.346 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.346 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.346 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.346 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.346 00:07:14.346 NVM Specific Namespace Data 00:07:14.346 =========================== 00:07:14.346 Logical Block Storage Tag Mask: 0 00:07:14.346 Protection Information Capabilities: 00:07:14.346 16b Guard Protection Information Storage Tag Support: No 00:07:14.346 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.346 Storage Tag Check Read Support: No 00:07:14.346 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.346 ===================================================== 00:07:14.346 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:14.346 ===================================================== 00:07:14.346 Controller Capabilities/Features 00:07:14.346 ================================ 00:07:14.346 Vendor ID: 1b36 00:07:14.346 Subsystem Vendor ID: 1af4 00:07:14.346 Serial Number: 12343 00:07:14.346 Model Number: QEMU NVMe Ctrl 00:07:14.346 Firmware Version: 8.0.0 00:07:14.346 Recommended Arb Burst: 6 00:07:14.346 IEEE OUI Identifier: 00 54 52 00:07:14.346 Multi-path I/O 00:07:14.346 May have multiple subsystem ports: No 00:07:14.346 May have multiple controllers: Yes 00:07:14.346 Associated with SR-IOV VF: No 00:07:14.346 Max Data Transfer Size: 524288 00:07:14.346 Max Number of Namespaces: 256 00:07:14.346 Max Number of I/O Queues: 64 00:07:14.346 NVMe Specification Version (VS): 1.4 00:07:14.346 NVMe Specification Version (Identify): 1.4 00:07:14.346 Maximum Queue Entries: 2048 00:07:14.346 Contiguous Queues Required: Yes 00:07:14.346 Arbitration Mechanisms Supported 00:07:14.347 Weighted Round Robin: Not Supported 00:07:14.347 Vendor Specific: Not Supported 00:07:14.347 Reset Timeout: 7500 ms 00:07:14.347 Doorbell Stride: 4 bytes 00:07:14.347 NVM Subsystem Reset: Not Supported 00:07:14.347 Command Sets Supported 00:07:14.347 NVM Command Set: Supported 00:07:14.347 Boot Partition: Not Supported 00:07:14.347 Memory Page Size Minimum: 4096 bytes 00:07:14.347 Memory Page Size Maximum: 65536 bytes 00:07:14.347 Persistent Memory Region: Not Supported 00:07:14.347 Optional Asynchronous Events Supported 00:07:14.347 Namespace Attribute Notices: Supported 00:07:14.347 Firmware Activation Notices: Not Supported 00:07:14.347 ANA Change Notices: Not Supported 00:07:14.347 PLE Aggregate Log Change Notices: Not Supported 00:07:14.347 LBA Status Info Alert Notices: Not Supported 00:07:14.347 EGE Aggregate Log Change Notices: Not Supported 00:07:14.347 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.347 Zone 
Descriptor Change Notices: Not Supported 00:07:14.347 Discovery Log Change Notices: Not Supported 00:07:14.347 Controller Attributes 00:07:14.347 128-bit Host Identifier: Not Supported 00:07:14.347 Non-Operational Permissive Mode: Not Supported 00:07:14.347 NVM Sets: Not Supported 00:07:14.347 Read Recovery Levels: Not Supported 00:07:14.347 Endurance Groups: Supported 00:07:14.347 Predictable Latency Mode: Not Supported 00:07:14.347 Traffic Based Keep ALive: Not Supported 00:07:14.347 Namespace Granularity: Not Supported 00:07:14.347 SQ Associations: Not Supported 00:07:14.347 UUID List: Not Supported 00:07:14.347 Multi-Domain Subsystem: Not Supported 00:07:14.347 Fixed Capacity Management: Not Supported 00:07:14.347 Variable Capacity Management: Not Supported 00:07:14.347 Delete Endurance Group: Not Supported 00:07:14.347 Delete NVM Set: Not Supported 00:07:14.347 Extended LBA Formats Supported: Supported 00:07:14.347 Flexible Data Placement Supported: Supported 00:07:14.347 00:07:14.347 Controller Memory Buffer Support 00:07:14.347 ================================ 00:07:14.347 Supported: No 00:07:14.347 00:07:14.347 Persistent Memory Region Support 00:07:14.347 ================================ 00:07:14.347 Supported: No 00:07:14.347 00:07:14.347 Admin Command Set Attributes 00:07:14.347 ============================ 00:07:14.347 Security Send/Receive: Not Supported 00:07:14.347 Format NVM: Supported 00:07:14.347 Firmware Activate/Download: Not Supported 00:07:14.347 Namespace Management: Supported 00:07:14.347 Device Self-Test: Not Supported 00:07:14.347 Directives: Supported 00:07:14.347 NVMe-MI: Not Supported 00:07:14.347 Virtualization Management: Not Supported 00:07:14.347 Doorbell Buffer Config: Supported 00:07:14.347 Get LBA Status Capability: Not Supported 00:07:14.347 Command & Feature Lockdown Capability: Not Supported 00:07:14.347 Abort Command Limit: 4 00:07:14.347 Async Event Request Limit: 4 00:07:14.347 Number of Firmware Slots: N/A 00:07:14.347 Firmware Slot 1 Read-Only: N/A 00:07:14.347 Firmware Activation Without Reset: N/A 00:07:14.347 Multiple Update Detection Support: N/A 00:07:14.347 Firmware Update Granularity: No Information Provided 00:07:14.347 Per-Namespace SMART Log: Yes 00:07:14.347 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.347 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:14.347 Command Effects Log Page: Supported 00:07:14.347 Get Log Page Extended Data: Supported 00:07:14.347 Telemetry Log Pages: Not Supported 00:07:14.347 Persistent Event Log Pages: Not Supported 00:07:14.347 Supported Log Pages Log Page: May Support 00:07:14.347 Commands Supported & Effects Log Page: Not Supported 00:07:14.347 Feature Identifiers & Effects Log Page:May Support 00:07:14.347 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.347 Data Area 4 for Telemetry Log: Not Supported 00:07:14.347 Error Log Page Entries Supported: 1 00:07:14.347 Keep Alive: Not Supported 00:07:14.347 00:07:14.347 NVM Command Set Attributes 00:07:14.347 ========================== 00:07:14.347 Submission Queue Entry Size 00:07:14.347 Max: 64 00:07:14.347 Min: 64 00:07:14.347 Completion Queue Entry Size 00:07:14.347 Max: 16 00:07:14.347 Min: 16 00:07:14.347 Number of Namespaces: 256 00:07:14.347 Compare Command: Supported 00:07:14.347 Write Uncorrectable Command: Not Supported 00:07:14.347 Dataset Management Command: Supported 00:07:14.347 Write Zeroes Command: Supported 00:07:14.347 Set Features Save Field: Supported 00:07:14.347 Reservations: Not Supported 00:07:14.347 
Timestamp: Supported 00:07:14.347 Copy: Supported 00:07:14.347 Volatile Write Cache: Present 00:07:14.347 Atomic Write Unit (Normal): 1 00:07:14.347 Atomic Write Unit (PFail): 1 00:07:14.347 Atomic Compare & Write Unit: 1 00:07:14.347 Fused Compare & Write: Not Supported 00:07:14.347 Scatter-Gather List 00:07:14.347 SGL Command Set: Supported 00:07:14.347 SGL Keyed: Not Supported 00:07:14.347 SGL Bit Bucket Descriptor: Not Supported 00:07:14.347 SGL Metadata Pointer: Not Supported 00:07:14.347 Oversized SGL: Not Supported 00:07:14.347 SGL Metadata Address: Not Supported 00:07:14.347 SGL Offset: Not Supported 00:07:14.347 Transport SGL Data Block: Not Supported 00:07:14.347 Replay Protected Memory Block: Not Supported 00:07:14.347 00:07:14.347 Firmware Slot Information 00:07:14.347 ========================= 00:07:14.347 Active slot: 1 00:07:14.347 Slot 1 Firmware Revision: 1.0 00:07:14.347 00:07:14.347 00:07:14.347 Commands Supported and Effects 00:07:14.347 ============================== 00:07:14.347 Admin Commands 00:07:14.347 -------------- 00:07:14.347 Delete I/O Submission Queue (00h): Supported 00:07:14.347 Create I/O Submission Queue (01h): Supported 00:07:14.347 Get Log Page (02h): Supported 00:07:14.347 Delete I/O Completion Queue (04h): Supported 00:07:14.347 Create I/O Completion Queue (05h): Supported 00:07:14.347 Identify (06h): Supported 00:07:14.347 Abort (08h): Supported 00:07:14.347 Set Features (09h): Supported 00:07:14.347 Get Features (0Ah): Supported 00:07:14.347 Asynchronous Event Request (0Ch): Supported 00:07:14.347 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.347 Directive Send (19h): Supported 00:07:14.347 Directive Receive (1Ah): Supported 00:07:14.347 Virtualization Management (1Ch): Supported 00:07:14.347 Doorbell Buffer Config (7Ch): Supported 00:07:14.347 Format NVM (80h): Supported LBA-Change 00:07:14.347 I/O Commands 00:07:14.347 ------------ 00:07:14.347 Flush (00h): Supported LBA-Change 00:07:14.347 Write (01h): Supported LBA-Change 00:07:14.347 Read (02h): Supported 00:07:14.347 Compare (05h): Supported 00:07:14.347 Write Zeroes (08h): Supported LBA-Change 00:07:14.347 Dataset Management (09h): Supported LBA-Change 00:07:14.347 Unknown (0Ch): Supported 00:07:14.347 Unknown (12h): Supported 00:07:14.347 Copy (19h): Supported LBA-Change 00:07:14.347 Unknown (1Dh): Supported LBA-Change 00:07:14.347 00:07:14.347 Error Log 00:07:14.347 ========= 00:07:14.347 00:07:14.347 Arbitration 00:07:14.347 =========== 00:07:14.347 Arbitration Burst: no limit 00:07:14.347 00:07:14.347 Power Management 00:07:14.347 ================ 00:07:14.347 Number of Power States: 1 00:07:14.347 Current Power State: Power State #0 00:07:14.347 Power State #0: 00:07:14.347 Max Power: 25.00 W 00:07:14.347 Non-Operational State: Operational 00:07:14.347 Entry Latency: 16 microseconds 00:07:14.347 Exit Latency: 4 microseconds 00:07:14.347 Relative Read Throughput: 0 00:07:14.347 Relative Read Latency: 0 00:07:14.347 Relative Write Throughput: 0 00:07:14.347 Relative Write Latency: 0 00:07:14.347 Idle Power: Not Reported 00:07:14.347 Active Power: Not Reported 00:07:14.347 Non-Operational Permissive Mode: Not Supported 00:07:14.347 00:07:14.347 Health Information 00:07:14.347 ================== 00:07:14.347 Critical Warnings: 00:07:14.347 Available Spare Space: OK 00:07:14.347 Temperature: OK 00:07:14.347 Device Reliability: OK 00:07:14.347 Read Only: No 00:07:14.347 Volatile Memory Backup: OK 00:07:14.347 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.347 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.347 Available Spare: 0% 00:07:14.347 Available Spare Threshold: 0% 00:07:14.347 Life Percentage Used: 0% 00:07:14.347 Data Units Read: 765 00:07:14.347 Data Units Written: 694 00:07:14.347 Host Read Commands: 36832 00:07:14.347 Host Write Commands: 36258 00:07:14.347 Controller Busy Time: 0 minutes 00:07:14.347 Power Cycles: 0 00:07:14.348 Power On Hours: 0 hours 00:07:14.348 Unsafe Shutdowns: 0 00:07:14.348 Unrecoverable Media Errors: 0 00:07:14.348 Lifetime Error Log Entries: 0 00:07:14.348 Warning Temperature Time: 0 minutes 00:07:14.348 Critical Temperature Time: 0 minutes 00:07:14.348 00:07:14.348 Number of Queues 00:07:14.348 ================ 00:07:14.348 Number of I/O Submission Queues: 64 00:07:14.348 Number of I/O Completion Queues: 64 00:07:14.348 00:07:14.348 ZNS Specific Controller Data 00:07:14.348 ============================ 00:07:14.348 Zone Append Size Limit: 0 00:07:14.348 00:07:14.348 00:07:14.348 Active Namespaces 00:07:14.348 ================= 00:07:14.348 Namespace ID:1 00:07:14.348 Error Recovery Timeout: Unlimited 00:07:14.348 Command Set Identifier: NVM (00h) 00:07:14.348 Deallocate: Supported 00:07:14.348 Deallocated/Unwritten Error: Supported 00:07:14.348 Deallocated Read Value: All 0x00 00:07:14.348 Deallocate in Write Zeroes: Not Supported 00:07:14.348 Deallocated Guard Field: 0xFFFF 00:07:14.348 Flush: Supported 00:07:14.348 Reservation: Not Supported 00:07:14.348 Namespace Sharing Capabilities: Multiple Controllers 00:07:14.348 Size (in LBAs): 262144 (1GiB) 00:07:14.348 Capacity (in LBAs): 262144 (1GiB) 00:07:14.348 Utilization (in LBAs): 262144 (1GiB) 00:07:14.348 Thin Provisioning: Not Supported 00:07:14.348 Per-NS Atomic Units: No 00:07:14.348 Maximum Single Source Range Length: 128 00:07:14.348 Maximum Copy Length: 128 00:07:14.348 Maximum Source Range Count: 128 00:07:14.348 NGUID/EUI64 Never Reused: No 00:07:14.348 Namespace Write Protected: No 00:07:14.348 Endurance group ID: 1 00:07:14.348 Number of LBA Formats: 8 00:07:14.348 Current LBA Format: LBA Format #04 00:07:14.348 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.348 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.348 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.348 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.348 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.348 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.348 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.348 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.348 00:07:14.348 Get Feature FDP: 00:07:14.348 ================ 00:07:14.348 Enabled: Yes 00:07:14.348 FDP configuration index: 0 00:07:14.348 00:07:14.348 FDP configurations log page 00:07:14.348 =========================== 00:07:14.348 Number of FDP configurations: 1 00:07:14.348 Version: 0 00:07:14.348 Size: 112 00:07:14.348 FDP Configuration Descriptor: 0 00:07:14.348 Descriptor Size: 96 00:07:14.348 Reclaim Group Identifier format: 2 00:07:14.348 FDP Volatile Write Cache: Not Present 00:07:14.348 FDP Configuration: Valid 00:07:14.348 Vendor Specific Size: 0 00:07:14.348 Number of Reclaim Groups: 2 00:07:14.348 Number of Reclaim Unit Handles: 8 00:07:14.348 Max Placement Identifiers: 128 00:07:14.348 Number of Namespaces Supported: 256 00:07:14.348 Reclaim Unit Nominal Size: 6000000 bytes 00:07:14.348 Estimated Reclaim Unit Time Limit: Not Reported 00:07:14.348 RUH Desc #000: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:14.348 RUH Desc #002: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #003: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #004: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #005: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #006: RUH Type: Initially Isolated 00:07:14.348 RUH Desc #007: RUH Type: Initially Isolated 00:07:14.348 00:07:14.348 FDP reclaim unit handle usage log page 00:07:14.348 ====================================== 00:07:14.348 Number of Reclaim Unit Handles: 8 00:07:14.348 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:14.348 RUH Usage Desc #001: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #002: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #003: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #004: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #005: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #006: RUH Attributes: Unused 00:07:14.348 RUH Usage Desc #007: RUH Attributes: Unused 00:07:14.348 00:07:14.348 FDP statistics log page 00:07:14.348 ======================= 00:07:14.348 Host bytes with metadata written: 443850752 00:07:14.348 [2024-11-29 02:54:30.236151] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74269 terminated unexpectedly 00:07:14.348 Media bytes with metadata written: 443904000 00:07:14.348 Media bytes erased: 0 00:07:14.348 00:07:14.348 FDP events log page 00:07:14.348 =================== 00:07:14.348 Number of FDP events: 0 00:07:14.348 00:07:14.348 NVM Specific Namespace Data 00:07:14.348 =========================== 00:07:14.348 Logical Block Storage Tag Mask: 0 00:07:14.348 Protection Information Capabilities: 00:07:14.348 16b Guard Protection Information Storage Tag Support: No 00:07:14.348 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.348 Storage Tag Check Read Support: No 00:07:14.348 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.348 ===================================================== 00:07:14.348 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:14.348 ===================================================== 00:07:14.348 Controller Capabilities/Features 00:07:14.348 ================================ 00:07:14.348 Vendor ID: 1b36 00:07:14.348 Subsystem Vendor ID: 1af4 00:07:14.348 Serial Number: 12342 00:07:14.348 Model Number: QEMU NVMe Ctrl 00:07:14.348 Firmware Version: 8.0.0 00:07:14.348 Recommended Arb Burst: 6 00:07:14.348 IEEE OUI Identifier: 00 54 52 00:07:14.348 Multi-path I/O 00:07:14.348 May have multiple subsystem ports: No 00:07:14.348 May have multiple controllers: No 00:07:14.348 Associated with SR-IOV VF: No 00:07:14.348 Max Data Transfer Size: 524288 00:07:14.348 Max Number of Namespaces: 256 00:07:14.348 
Max Number of I/O Queues: 64 00:07:14.348 NVMe Specification Version (VS): 1.4 00:07:14.348 NVMe Specification Version (Identify): 1.4 00:07:14.348 Maximum Queue Entries: 2048 00:07:14.348 Contiguous Queues Required: Yes 00:07:14.348 Arbitration Mechanisms Supported 00:07:14.348 Weighted Round Robin: Not Supported 00:07:14.348 Vendor Specific: Not Supported 00:07:14.348 Reset Timeout: 7500 ms 00:07:14.348 Doorbell Stride: 4 bytes 00:07:14.348 NVM Subsystem Reset: Not Supported 00:07:14.348 Command Sets Supported 00:07:14.348 NVM Command Set: Supported 00:07:14.348 Boot Partition: Not Supported 00:07:14.348 Memory Page Size Minimum: 4096 bytes 00:07:14.348 Memory Page Size Maximum: 65536 bytes 00:07:14.348 Persistent Memory Region: Not Supported 00:07:14.348 Optional Asynchronous Events Supported 00:07:14.348 Namespace Attribute Notices: Supported 00:07:14.348 Firmware Activation Notices: Not Supported 00:07:14.348 ANA Change Notices: Not Supported 00:07:14.348 PLE Aggregate Log Change Notices: Not Supported 00:07:14.348 LBA Status Info Alert Notices: Not Supported 00:07:14.348 EGE Aggregate Log Change Notices: Not Supported 00:07:14.348 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.348 Zone Descriptor Change Notices: Not Supported 00:07:14.349 Discovery Log Change Notices: Not Supported 00:07:14.349 Controller Attributes 00:07:14.349 128-bit Host Identifier: Not Supported 00:07:14.349 Non-Operational Permissive Mode: Not Supported 00:07:14.349 NVM Sets: Not Supported 00:07:14.349 Read Recovery Levels: Not Supported 00:07:14.349 Endurance Groups: Not Supported 00:07:14.349 Predictable Latency Mode: Not Supported 00:07:14.349 Traffic Based Keep Alive: Not Supported 00:07:14.349 Namespace Granularity: Not Supported 00:07:14.349 SQ Associations: Not Supported 00:07:14.349 UUID List: Not Supported 00:07:14.349 Multi-Domain Subsystem: Not Supported 00:07:14.349 Fixed Capacity Management: Not Supported 00:07:14.349 Variable Capacity Management: Not Supported 00:07:14.349 Delete Endurance Group: Not Supported 00:07:14.349 Delete NVM Set: Not Supported 00:07:14.349 Extended LBA Formats Supported: Supported 00:07:14.349 Flexible Data Placement Supported: Not Supported 00:07:14.349 00:07:14.349 Controller Memory Buffer Support 00:07:14.349 ================================ 00:07:14.349 Supported: No 00:07:14.349 00:07:14.349 Persistent Memory Region Support 00:07:14.349 ================================ 00:07:14.349 Supported: No 00:07:14.349 00:07:14.349 Admin Command Set Attributes 00:07:14.349 ============================ 00:07:14.349 Security Send/Receive: Not Supported 00:07:14.349 Format NVM: Supported 00:07:14.349 Firmware Activate/Download: Not Supported 00:07:14.349 Namespace Management: Supported 00:07:14.349 Device Self-Test: Not Supported 00:07:14.349 Directives: Supported 00:07:14.349 NVMe-MI: Not Supported 00:07:14.349 Virtualization Management: Not Supported 00:07:14.349 Doorbell Buffer Config: Supported 00:07:14.349 Get LBA Status Capability: Not Supported 00:07:14.349 Command & Feature Lockdown Capability: Not Supported 00:07:14.349 Abort Command Limit: 4 00:07:14.349 Async Event Request Limit: 4 00:07:14.349 Number of Firmware Slots: N/A 00:07:14.349 Firmware Slot 1 Read-Only: N/A 00:07:14.349 Firmware Activation Without Reset: N/A 00:07:14.349 Multiple Update Detection Support: N/A 00:07:14.349 Firmware Update Granularity: No Information Provided 00:07:14.349 Per-Namespace SMART Log: Yes 00:07:14.349 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.349 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:14.349 Command Effects Log Page: Supported 00:07:14.349 Get Log Page Extended Data: Supported 00:07:14.349 Telemetry Log Pages: Not Supported 00:07:14.349 Persistent Event Log Pages: Not Supported 00:07:14.349 Supported Log Pages Log Page: May Support 00:07:14.349 Commands Supported & Effects Log Page: Not Supported 00:07:14.349 Feature Identifiers & Effects Log Page: May Support 00:07:14.349 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.349 Data Area 4 for Telemetry Log: Not Supported 00:07:14.349 Error Log Page Entries Supported: 1 00:07:14.349 Keep Alive: Not Supported 00:07:14.349 00:07:14.349 NVM Command Set Attributes 00:07:14.349 ========================== 00:07:14.349 Submission Queue Entry Size 00:07:14.349 Max: 64 00:07:14.349 Min: 64 00:07:14.349 Completion Queue Entry Size 00:07:14.349 Max: 16 00:07:14.349 Min: 16 00:07:14.349 Number of Namespaces: 256 00:07:14.349 Compare Command: Supported 00:07:14.349 Write Uncorrectable Command: Not Supported 00:07:14.349 Dataset Management Command: Supported 00:07:14.349 Write Zeroes Command: Supported 00:07:14.349 Set Features Save Field: Supported 00:07:14.349 Reservations: Not Supported 00:07:14.349 Timestamp: Supported 00:07:14.349 Copy: Supported 00:07:14.349 Volatile Write Cache: Present 00:07:14.349 Atomic Write Unit (Normal): 1 00:07:14.349 Atomic Write Unit (PFail): 1 00:07:14.349 Atomic Compare & Write Unit: 1 00:07:14.349 Fused Compare & Write: Not Supported 00:07:14.349 Scatter-Gather List 00:07:14.349 SGL Command Set: Supported 00:07:14.349 SGL Keyed: Not Supported 00:07:14.349 SGL Bit Bucket Descriptor: Not Supported 00:07:14.349 SGL Metadata Pointer: Not Supported 00:07:14.349 Oversized SGL: Not Supported 00:07:14.349 SGL Metadata Address: Not Supported 00:07:14.349 SGL Offset: Not Supported 00:07:14.349 Transport SGL Data Block: Not Supported 00:07:14.349 Replay Protected Memory Block: Not Supported 00:07:14.349 00:07:14.349 Firmware Slot Information 00:07:14.349 ========================= 00:07:14.349 Active slot: 1 00:07:14.349 Slot 1 Firmware Revision: 1.0 00:07:14.349 00:07:14.349 00:07:14.349 Commands Supported and Effects 00:07:14.349 ============================== 00:07:14.349 Admin Commands 00:07:14.349 -------------- 00:07:14.349 Delete I/O Submission Queue (00h): Supported 00:07:14.349 Create I/O Submission Queue (01h): Supported 00:07:14.349 Get Log Page (02h): Supported 00:07:14.349 Delete I/O Completion Queue (04h): Supported 00:07:14.349 Create I/O Completion Queue (05h): Supported 00:07:14.349 Identify (06h): Supported 00:07:14.349 Abort (08h): Supported 00:07:14.349 Set Features (09h): Supported 00:07:14.349 Get Features (0Ah): Supported 00:07:14.349 Asynchronous Event Request (0Ch): Supported 00:07:14.349 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.349 Directive Send (19h): Supported 00:07:14.349 Directive Receive (1Ah): Supported 00:07:14.349 Virtualization Management (1Ch): Supported 00:07:14.349 Doorbell Buffer Config (7Ch): Supported 00:07:14.349 Format NVM (80h): Supported LBA-Change 00:07:14.349 I/O Commands 00:07:14.349 ------------ 00:07:14.349 Flush (00h): Supported LBA-Change 00:07:14.349 Write (01h): Supported LBA-Change 00:07:14.349 Read (02h): Supported 00:07:14.349 Compare (05h): Supported 00:07:14.349 Write Zeroes (08h): Supported LBA-Change 00:07:14.349 Dataset Management (09h): Supported LBA-Change 00:07:14.349 Unknown (0Ch): Supported 00:07:14.349 Unknown (12h): Supported 00:07:14.349 Copy (19h): Supported 
LBA-Change 00:07:14.349 Unknown (1Dh): Supported LBA-Change 00:07:14.349 00:07:14.349 Error Log 00:07:14.349 ========= 00:07:14.349 00:07:14.349 Arbitration 00:07:14.349 =========== 00:07:14.349 Arbitration Burst: no limit 00:07:14.349 00:07:14.349 Power Management 00:07:14.349 ================ 00:07:14.349 Number of Power States: 1 00:07:14.349 Current Power State: Power State #0 00:07:14.349 Power State #0: 00:07:14.349 Max Power: 25.00 W 00:07:14.349 Non-Operational State: Operational 00:07:14.349 Entry Latency: 16 microseconds 00:07:14.349 Exit Latency: 4 microseconds 00:07:14.349 Relative Read Throughput: 0 00:07:14.349 Relative Read Latency: 0 00:07:14.349 Relative Write Throughput: 0 00:07:14.349 Relative Write Latency: 0 00:07:14.349 Idle Power: Not Reported 00:07:14.349 Active Power: Not Reported 00:07:14.349 Non-Operational Permissive Mode: Not Supported 00:07:14.349 00:07:14.349 Health Information 00:07:14.349 ================== 00:07:14.349 Critical Warnings: 00:07:14.349 Available Spare Space: OK 00:07:14.349 Temperature: OK 00:07:14.349 Device Reliability: OK 00:07:14.349 Read Only: No 00:07:14.349 Volatile Memory Backup: OK 00:07:14.349 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.349 Available Spare: 0% 00:07:14.349 Available Spare Threshold: 0% 00:07:14.349 Life Percentage Used: 0% 00:07:14.349 Data Units Read: 2071 00:07:14.349 Data Units Written: 1858 00:07:14.349 Host Read Commands: 108158 00:07:14.349 Host Write Commands: 106428 00:07:14.349 Controller Busy Time: 0 minutes 00:07:14.349 Power Cycles: 0 00:07:14.349 Power On Hours: 0 hours 00:07:14.349 Unsafe Shutdowns: 0 00:07:14.349 Unrecoverable Media Errors: 0 00:07:14.349 Lifetime Error Log Entries: 0 00:07:14.349 Warning Temperature Time: 0 minutes 00:07:14.349 Critical Temperature Time: 0 minutes 00:07:14.349 00:07:14.349 Number of Queues 00:07:14.349 ================ 00:07:14.349 Number of I/O Submission Queues: 64 00:07:14.349 Number of I/O Completion Queues: 64 00:07:14.349 00:07:14.349 ZNS Specific Controller Data 00:07:14.349 ============================ 00:07:14.349 Zone Append Size Limit: 0 00:07:14.349 00:07:14.349 00:07:14.349 Active Namespaces 00:07:14.349 ================= 00:07:14.349 Namespace ID:1 00:07:14.349 Error Recovery Timeout: Unlimited 00:07:14.349 Command Set Identifier: NVM (00h) 00:07:14.349 Deallocate: Supported 00:07:14.349 Deallocated/Unwritten Error: Supported 00:07:14.349 Deallocated Read Value: All 0x00 00:07:14.350 Deallocate in Write Zeroes: Not Supported 00:07:14.350 Deallocated Guard Field: 0xFFFF 00:07:14.350 Flush: Supported 00:07:14.350 Reservation: Not Supported 00:07:14.350 Namespace Sharing Capabilities: Private 00:07:14.350 Size (in LBAs): 1048576 (4GiB) 00:07:14.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.350 Thin Provisioning: Not Supported 00:07:14.350 Per-NS Atomic Units: No 00:07:14.350 Maximum Single Source Range Length: 128 00:07:14.350 Maximum Copy Length: 128 00:07:14.350 Maximum Source Range Count: 128 00:07:14.350 NGUID/EUI64 Never Reused: No 00:07:14.350 Namespace Write Protected: No 00:07:14.350 Number of LBA Formats: 8 00:07:14.350 Current LBA Format: LBA Format #04 00:07:14.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.350 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:14.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.350 00:07:14.350 NVM Specific Namespace Data 00:07:14.350 =========================== 00:07:14.350 Logical Block Storage Tag Mask: 0 00:07:14.350 Protection Information Capabilities: 00:07:14.350 16b Guard Protection Information Storage Tag Support: No 00:07:14.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.350 Storage Tag Check Read Support: No 00:07:14.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Namespace ID:2 00:07:14.350 Error Recovery Timeout: Unlimited 00:07:14.350 Command Set Identifier: NVM (00h) 00:07:14.350 Deallocate: Supported 00:07:14.350 Deallocated/Unwritten Error: Supported 00:07:14.350 Deallocated Read Value: All 0x00 00:07:14.350 Deallocate in Write Zeroes: Not Supported 00:07:14.350 Deallocated Guard Field: 0xFFFF 00:07:14.350 Flush: Supported 00:07:14.350 Reservation: Not Supported 00:07:14.350 Namespace Sharing Capabilities: Private 00:07:14.350 Size (in LBAs): 1048576 (4GiB) 00:07:14.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.350 Thin Provisioning: Not Supported 00:07:14.350 Per-NS Atomic Units: No 00:07:14.350 Maximum Single Source Range Length: 128 00:07:14.350 Maximum Copy Length: 128 00:07:14.350 Maximum Source Range Count: 128 00:07:14.350 NGUID/EUI64 Never Reused: No 00:07:14.350 Namespace Write Protected: No 00:07:14.350 Number of LBA Formats: 8 00:07:14.350 Current LBA Format: LBA Format #04 00:07:14.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.350 00:07:14.350 NVM Specific Namespace Data 00:07:14.350 =========================== 00:07:14.350 Logical Block Storage Tag Mask: 0 00:07:14.350 Protection Information Capabilities: 00:07:14.350 16b Guard Protection Information Storage Tag Support: No 00:07:14.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.350 Storage Tag Check Read Support: No 00:07:14.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:14.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Namespace ID:3 00:07:14.350 Error Recovery Timeout: Unlimited 00:07:14.350 Command Set Identifier: NVM (00h) 00:07:14.350 Deallocate: Supported 00:07:14.350 Deallocated/Unwritten Error: Supported 00:07:14.350 Deallocated Read Value: All 0x00 00:07:14.350 Deallocate in Write Zeroes: Not Supported 00:07:14.350 Deallocated Guard Field: 0xFFFF 00:07:14.350 Flush: Supported 00:07:14.350 Reservation: Not Supported 00:07:14.350 Namespace Sharing Capabilities: Private 00:07:14.350 Size (in LBAs): 1048576 (4GiB) 00:07:14.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.350 Thin Provisioning: Not Supported 00:07:14.350 Per-NS Atomic Units: No 00:07:14.350 Maximum Single Source Range Length: 128 00:07:14.350 Maximum Copy Length: 128 00:07:14.350 Maximum Source Range Count: 128 00:07:14.350 NGUID/EUI64 Never Reused: No 00:07:14.350 Namespace Write Protected: No 00:07:14.350 Number of LBA Formats: 8 00:07:14.350 Current LBA Format: LBA Format #04 00:07:14.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.350 00:07:14.350 NVM Specific Namespace Data 00:07:14.350 =========================== 00:07:14.350 Logical Block Storage Tag Mask: 0 00:07:14.350 Protection Information Capabilities: 00:07:14.350 16b Guard Protection Information Storage Tag Support: No 00:07:14.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.350 Storage Tag Check Read Support: No 00:07:14.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.350 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:14.350 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:14.608 ===================================================== 00:07:14.608 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:14.608 ===================================================== 00:07:14.608 Controller Capabilities/Features 00:07:14.608 ================================ 00:07:14.608 Vendor ID: 1b36 00:07:14.608 Subsystem Vendor ID: 1af4 00:07:14.608 Serial Number: 12340 00:07:14.608 Model Number: QEMU NVMe Ctrl 00:07:14.608 Firmware Version: 8.0.0 00:07:14.608 Recommended Arb Burst: 6 00:07:14.608 IEEE OUI Identifier: 00 54 52 00:07:14.608 Multi-path I/O 00:07:14.608 May have multiple subsystem ports: No 00:07:14.608 May have multiple controllers: No 00:07:14.608 Associated with SR-IOV VF: No 00:07:14.608 Max Data Transfer Size: 524288 00:07:14.608 Max Number of Namespaces: 256 00:07:14.608 Max Number of I/O Queues: 64 00:07:14.608 NVMe Specification Version (VS): 1.4 00:07:14.608 NVMe Specification Version (Identify): 1.4 00:07:14.608 Maximum Queue Entries: 2048 00:07:14.608 Contiguous Queues Required: Yes 00:07:14.608 Arbitration Mechanisms Supported 00:07:14.608 Weighted Round Robin: Not Supported 00:07:14.608 Vendor Specific: Not Supported 00:07:14.608 Reset Timeout: 7500 ms 00:07:14.608 Doorbell Stride: 4 bytes 00:07:14.608 NVM Subsystem Reset: Not Supported 00:07:14.608 Command Sets Supported 00:07:14.608 NVM Command Set: Supported 00:07:14.608 Boot Partition: Not Supported 00:07:14.608 Memory Page Size Minimum: 4096 bytes 00:07:14.608 Memory Page Size Maximum: 65536 bytes 00:07:14.608 Persistent Memory Region: Not Supported 00:07:14.608 Optional Asynchronous Events Supported 00:07:14.608 Namespace Attribute Notices: Supported 00:07:14.608 Firmware Activation Notices: Not Supported 00:07:14.608 ANA Change Notices: Not Supported 00:07:14.608 PLE Aggregate Log Change Notices: Not Supported 00:07:14.608 LBA Status Info Alert Notices: Not Supported 00:07:14.608 EGE Aggregate Log Change Notices: Not Supported 00:07:14.608 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.608 Zone Descriptor Change Notices: Not Supported 00:07:14.608 Discovery Log Change Notices: Not Supported 00:07:14.608 Controller Attributes 00:07:14.608 128-bit Host Identifier: Not Supported 00:07:14.608 Non-Operational Permissive Mode: Not Supported 00:07:14.608 NVM Sets: Not Supported 00:07:14.608 Read Recovery Levels: Not Supported 00:07:14.608 Endurance Groups: Not Supported 00:07:14.608 Predictable Latency Mode: Not Supported 00:07:14.608 Traffic Based Keep Alive: Not Supported 00:07:14.608 Namespace Granularity: Not Supported 00:07:14.608 SQ Associations: Not Supported 00:07:14.608 UUID List: Not Supported 00:07:14.608 Multi-Domain Subsystem: Not Supported 00:07:14.608 Fixed Capacity Management: Not Supported 00:07:14.608 Variable Capacity Management: Not Supported 00:07:14.608 Delete Endurance Group: Not Supported 00:07:14.608 Delete NVM Set: Not Supported 00:07:14.608 Extended LBA Formats Supported: Supported 00:07:14.608 Flexible Data Placement Supported: Not Supported 00:07:14.608 00:07:14.608 Controller Memory Buffer Support 00:07:14.608 ================================ 00:07:14.608 Supported: No 00:07:14.608 00:07:14.608 Persistent Memory Region Support 00:07:14.608 ================================ 00:07:14.608 Supported: No 00:07:14.608 00:07:14.608 Admin Command Set Attributes 00:07:14.608 ============================ 00:07:14.608 Security Send/Receive: Not Supported 00:07:14.608 
Format NVM: Supported 00:07:14.608 Firmware Activate/Download: Not Supported 00:07:14.608 Namespace Management: Supported 00:07:14.608 Device Self-Test: Not Supported 00:07:14.608 Directives: Supported 00:07:14.608 NVMe-MI: Not Supported 00:07:14.608 Virtualization Management: Not Supported 00:07:14.608 Doorbell Buffer Config: Supported 00:07:14.608 Get LBA Status Capability: Not Supported 00:07:14.608 Command & Feature Lockdown Capability: Not Supported 00:07:14.608 Abort Command Limit: 4 00:07:14.608 Async Event Request Limit: 4 00:07:14.608 Number of Firmware Slots: N/A 00:07:14.608 Firmware Slot 1 Read-Only: N/A 00:07:14.608 Firmware Activation Without Reset: N/A 00:07:14.608 Multiple Update Detection Support: N/A 00:07:14.609 Firmware Update Granularity: No Information Provided 00:07:14.609 Per-Namespace SMART Log: Yes 00:07:14.609 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.609 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:14.609 Command Effects Log Page: Supported 00:07:14.609 Get Log Page Extended Data: Supported 00:07:14.609 Telemetry Log Pages: Not Supported 00:07:14.609 Persistent Event Log Pages: Not Supported 00:07:14.609 Supported Log Pages Log Page: May Support 00:07:14.609 Commands Supported & Effects Log Page: Not Supported 00:07:14.609 Feature Identifiers & Effects Log Page: May Support 00:07:14.609 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.609 Data Area 4 for Telemetry Log: Not Supported 00:07:14.609 Error Log Page Entries Supported: 1 00:07:14.609 Keep Alive: Not Supported 00:07:14.609 00:07:14.609 NVM Command Set Attributes 00:07:14.609 ========================== 00:07:14.609 Submission Queue Entry Size 00:07:14.609 Max: 64 00:07:14.609 Min: 64 00:07:14.609 Completion Queue Entry Size 00:07:14.609 Max: 16 00:07:14.609 Min: 16 00:07:14.609 Number of Namespaces: 256 00:07:14.609 Compare Command: Supported 00:07:14.609 Write Uncorrectable Command: Not Supported 00:07:14.609 Dataset Management Command: Supported 00:07:14.609 Write Zeroes Command: Supported 00:07:14.609 Set Features Save Field: Supported 00:07:14.609 Reservations: Not Supported 00:07:14.609 Timestamp: Supported 00:07:14.609 Copy: Supported 00:07:14.609 Volatile Write Cache: Present 00:07:14.609 Atomic Write Unit (Normal): 1 00:07:14.609 Atomic Write Unit (PFail): 1 00:07:14.609 Atomic Compare & Write Unit: 1 00:07:14.609 Fused Compare & Write: Not Supported 00:07:14.609 Scatter-Gather List 00:07:14.609 SGL Command Set: Supported 00:07:14.609 SGL Keyed: Not Supported 00:07:14.609 SGL Bit Bucket Descriptor: Not Supported 00:07:14.609 SGL Metadata Pointer: Not Supported 00:07:14.609 Oversized SGL: Not Supported 00:07:14.609 SGL Metadata Address: Not Supported 00:07:14.609 SGL Offset: Not Supported 00:07:14.609 Transport SGL Data Block: Not Supported 00:07:14.609 Replay Protected Memory Block: Not Supported 00:07:14.609 00:07:14.609 Firmware Slot Information 00:07:14.609 ========================= 00:07:14.609 Active slot: 1 00:07:14.609 Slot 1 Firmware Revision: 1.0 00:07:14.609 00:07:14.609 00:07:14.609 Commands Supported and Effects 00:07:14.609 ============================== 00:07:14.609 Admin Commands 00:07:14.609 -------------- 00:07:14.609 Delete I/O Submission Queue (00h): Supported 00:07:14.609 Create I/O Submission Queue (01h): Supported 00:07:14.609 Get Log Page (02h): Supported 00:07:14.609 Delete I/O Completion Queue (04h): Supported 00:07:14.609 Create I/O Completion Queue (05h): Supported 00:07:14.609 Identify (06h): Supported 00:07:14.609 Abort (08h): Supported 
00:07:14.609 Set Features (09h): Supported 00:07:14.609 Get Features (0Ah): Supported 00:07:14.609 Asynchronous Event Request (0Ch): Supported 00:07:14.609 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.609 Directive Send (19h): Supported 00:07:14.609 Directive Receive (1Ah): Supported 00:07:14.609 Virtualization Management (1Ch): Supported 00:07:14.609 Doorbell Buffer Config (7Ch): Supported 00:07:14.609 Format NVM (80h): Supported LBA-Change 00:07:14.609 I/O Commands 00:07:14.609 ------------ 00:07:14.609 Flush (00h): Supported LBA-Change 00:07:14.609 Write (01h): Supported LBA-Change 00:07:14.609 Read (02h): Supported 00:07:14.609 Compare (05h): Supported 00:07:14.609 Write Zeroes (08h): Supported LBA-Change 00:07:14.609 Dataset Management (09h): Supported LBA-Change 00:07:14.609 Unknown (0Ch): Supported 00:07:14.609 Unknown (12h): Supported 00:07:14.609 Copy (19h): Supported LBA-Change 00:07:14.609 Unknown (1Dh): Supported LBA-Change 00:07:14.609 00:07:14.609 Error Log 00:07:14.609 ========= 00:07:14.609 00:07:14.609 Arbitration 00:07:14.609 =========== 00:07:14.609 Arbitration Burst: no limit 00:07:14.609 00:07:14.609 Power Management 00:07:14.609 ================ 00:07:14.609 Number of Power States: 1 00:07:14.609 Current Power State: Power State #0 00:07:14.609 Power State #0: 00:07:14.609 Max Power: 25.00 W 00:07:14.609 Non-Operational State: Operational 00:07:14.609 Entry Latency: 16 microseconds 00:07:14.609 Exit Latency: 4 microseconds 00:07:14.609 Relative Read Throughput: 0 00:07:14.609 Relative Read Latency: 0 00:07:14.609 Relative Write Throughput: 0 00:07:14.609 Relative Write Latency: 0 00:07:14.609 Idle Power: Not Reported 00:07:14.609 Active Power: Not Reported 00:07:14.609 Non-Operational Permissive Mode: Not Supported 00:07:14.609 00:07:14.609 Health Information 00:07:14.609 ================== 00:07:14.609 Critical Warnings: 00:07:14.609 Available Spare Space: OK 00:07:14.609 Temperature: OK 00:07:14.609 Device Reliability: OK 00:07:14.609 Read Only: No 00:07:14.609 Volatile Memory Backup: OK 00:07:14.609 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.609 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.609 Available Spare: 0% 00:07:14.609 Available Spare Threshold: 0% 00:07:14.609 Life Percentage Used: 0% 00:07:14.609 Data Units Read: 649 00:07:14.609 Data Units Written: 577 00:07:14.609 Host Read Commands: 35210 00:07:14.609 Host Write Commands: 34996 00:07:14.609 Controller Busy Time: 0 minutes 00:07:14.609 Power Cycles: 0 00:07:14.609 Power On Hours: 0 hours 00:07:14.609 Unsafe Shutdowns: 0 00:07:14.609 Unrecoverable Media Errors: 0 00:07:14.609 Lifetime Error Log Entries: 0 00:07:14.609 Warning Temperature Time: 0 minutes 00:07:14.609 Critical Temperature Time: 0 minutes 00:07:14.609 00:07:14.609 Number of Queues 00:07:14.609 ================ 00:07:14.609 Number of I/O Submission Queues: 64 00:07:14.609 Number of I/O Completion Queues: 64 00:07:14.609 00:07:14.609 ZNS Specific Controller Data 00:07:14.609 ============================ 00:07:14.609 Zone Append Size Limit: 0 00:07:14.609 00:07:14.609 00:07:14.609 Active Namespaces 00:07:14.609 ================= 00:07:14.609 Namespace ID:1 00:07:14.609 Error Recovery Timeout: Unlimited 00:07:14.609 Command Set Identifier: NVM (00h) 00:07:14.609 Deallocate: Supported 00:07:14.609 Deallocated/Unwritten Error: Supported 00:07:14.609 Deallocated Read Value: All 0x00 00:07:14.609 Deallocate in Write Zeroes: Not Supported 00:07:14.609 Deallocated Guard Field: 0xFFFF 00:07:14.609 Flush: 
Supported 00:07:14.609 Reservation: Not Supported 00:07:14.609 Metadata Transferred as: Separate Metadata Buffer 00:07:14.609 Namespace Sharing Capabilities: Private 00:07:14.609 Size (in LBAs): 1548666 (5GiB) 00:07:14.609 Capacity (in LBAs): 1548666 (5GiB) 00:07:14.609 Utilization (in LBAs): 1548666 (5GiB) 00:07:14.609 Thin Provisioning: Not Supported 00:07:14.609 Per-NS Atomic Units: No 00:07:14.609 Maximum Single Source Range Length: 128 00:07:14.609 Maximum Copy Length: 128 00:07:14.609 Maximum Source Range Count: 128 00:07:14.609 NGUID/EUI64 Never Reused: No 00:07:14.609 Namespace Write Protected: No 00:07:14.609 Number of LBA Formats: 8 00:07:14.609 Current LBA Format: LBA Format #07 00:07:14.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.609 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.609 00:07:14.609 NVM Specific Namespace Data 00:07:14.609 =========================== 00:07:14.609 Logical Block Storage Tag Mask: 0 00:07:14.609 Protection Information Capabilities: 00:07:14.609 16b Guard Protection Information Storage Tag Support: No 00:07:14.609 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.609 Storage Tag Check Read Support: No 00:07:14.609 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.609 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.610 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.610 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:14.610 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:14.868 ===================================================== 00:07:14.868 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:14.868 ===================================================== 00:07:14.868 Controller Capabilities/Features 00:07:14.868 ================================ 00:07:14.868 Vendor ID: 1b36 00:07:14.868 Subsystem Vendor ID: 1af4 00:07:14.868 Serial Number: 12341 00:07:14.868 Model Number: QEMU NVMe Ctrl 00:07:14.868 Firmware Version: 8.0.0 00:07:14.868 Recommended Arb Burst: 6 00:07:14.868 IEEE OUI Identifier: 00 54 52 00:07:14.868 Multi-path I/O 00:07:14.868 May have multiple subsystem ports: No 00:07:14.868 May have multiple controllers: No 00:07:14.869 Associated with SR-IOV VF: No 00:07:14.869 Max Data Transfer Size: 524288 00:07:14.869 Max Number of Namespaces: 256 00:07:14.869 Max Number of I/O Queues: 64 00:07:14.869 NVMe 
Specification Version (VS): 1.4 00:07:14.869 NVMe Specification Version (Identify): 1.4 00:07:14.869 Maximum Queue Entries: 2048 00:07:14.869 Contiguous Queues Required: Yes 00:07:14.869 Arbitration Mechanisms Supported 00:07:14.869 Weighted Round Robin: Not Supported 00:07:14.869 Vendor Specific: Not Supported 00:07:14.869 Reset Timeout: 7500 ms 00:07:14.869 Doorbell Stride: 4 bytes 00:07:14.869 NVM Subsystem Reset: Not Supported 00:07:14.869 Command Sets Supported 00:07:14.869 NVM Command Set: Supported 00:07:14.869 Boot Partition: Not Supported 00:07:14.869 Memory Page Size Minimum: 4096 bytes 00:07:14.869 Memory Page Size Maximum: 65536 bytes 00:07:14.869 Persistent Memory Region: Not Supported 00:07:14.869 Optional Asynchronous Events Supported 00:07:14.869 Namespace Attribute Notices: Supported 00:07:14.869 Firmware Activation Notices: Not Supported 00:07:14.869 ANA Change Notices: Not Supported 00:07:14.869 PLE Aggregate Log Change Notices: Not Supported 00:07:14.869 LBA Status Info Alert Notices: Not Supported 00:07:14.869 EGE Aggregate Log Change Notices: Not Supported 00:07:14.869 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.869 Zone Descriptor Change Notices: Not Supported 00:07:14.869 Discovery Log Change Notices: Not Supported 00:07:14.869 Controller Attributes 00:07:14.869 128-bit Host Identifier: Not Supported 00:07:14.869 Non-Operational Permissive Mode: Not Supported 00:07:14.869 NVM Sets: Not Supported 00:07:14.869 Read Recovery Levels: Not Supported 00:07:14.869 Endurance Groups: Not Supported 00:07:14.869 Predictable Latency Mode: Not Supported 00:07:14.869 Traffic Based Keep Alive: Not Supported 00:07:14.869 Namespace Granularity: Not Supported 00:07:14.869 SQ Associations: Not Supported 00:07:14.869 UUID List: Not Supported 00:07:14.869 Multi-Domain Subsystem: Not Supported 00:07:14.869 Fixed Capacity Management: Not Supported 00:07:14.869 Variable Capacity Management: Not Supported 00:07:14.869 Delete Endurance Group: Not Supported 00:07:14.869 Delete NVM Set: Not Supported 00:07:14.869 Extended LBA Formats Supported: Supported 00:07:14.869 Flexible Data Placement Supported: Not Supported 00:07:14.869 00:07:14.869 Controller Memory Buffer Support 00:07:14.869 ================================ 00:07:14.869 Supported: No 00:07:14.869 00:07:14.869 Persistent Memory Region Support 00:07:14.869 ================================ 00:07:14.869 Supported: No 00:07:14.869 00:07:14.869 Admin Command Set Attributes 00:07:14.869 ============================ 00:07:14.869 Security Send/Receive: Not Supported 00:07:14.869 Format NVM: Supported 00:07:14.869 Firmware Activate/Download: Not Supported 00:07:14.869 Namespace Management: Supported 00:07:14.869 Device Self-Test: Not Supported 00:07:14.869 Directives: Supported 00:07:14.869 NVMe-MI: Not Supported 00:07:14.869 Virtualization Management: Not Supported 00:07:14.869 Doorbell Buffer Config: Supported 00:07:14.869 Get LBA Status Capability: Not Supported 00:07:14.869 Command & Feature Lockdown Capability: Not Supported 00:07:14.869 Abort Command Limit: 4 00:07:14.869 Async Event Request Limit: 4 00:07:14.869 Number of Firmware Slots: N/A 00:07:14.869 Firmware Slot 1 Read-Only: N/A 00:07:14.869 Firmware Activation Without Reset: N/A 00:07:14.869 Multiple Update Detection Support: N/A 00:07:14.869 Firmware Update Granularity: No Information Provided 00:07:14.869 Per-Namespace SMART Log: Yes 00:07:14.869 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.869 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:14.869 Command Effects Log Page: Supported 00:07:14.869 Get Log Page Extended Data: Supported 00:07:14.869 Telemetry Log Pages: Not Supported 00:07:14.869 Persistent Event Log Pages: Not Supported 00:07:14.869 Supported Log Pages Log Page: May Support 00:07:14.869 Commands Supported & Effects Log Page: Not Supported 00:07:14.869 Feature Identifiers & Effects Log Page: May Support 00:07:14.869 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.869 Data Area 4 for Telemetry Log: Not Supported 00:07:14.869 Error Log Page Entries Supported: 1 00:07:14.869 Keep Alive: Not Supported 00:07:14.869 00:07:14.869 NVM Command Set Attributes 00:07:14.869 ========================== 00:07:14.869 Submission Queue Entry Size 00:07:14.869 Max: 64 00:07:14.869 Min: 64 00:07:14.869 Completion Queue Entry Size 00:07:14.869 Max: 16 00:07:14.869 Min: 16 00:07:14.869 Number of Namespaces: 256 00:07:14.869 Compare Command: Supported 00:07:14.869 Write Uncorrectable Command: Not Supported 00:07:14.869 Dataset Management Command: Supported 00:07:14.869 Write Zeroes Command: Supported 00:07:14.869 Set Features Save Field: Supported 00:07:14.869 Reservations: Not Supported 00:07:14.869 Timestamp: Supported 00:07:14.869 Copy: Supported 00:07:14.869 Volatile Write Cache: Present 00:07:14.869 Atomic Write Unit (Normal): 1 00:07:14.869 Atomic Write Unit (PFail): 1 00:07:14.869 Atomic Compare & Write Unit: 1 00:07:14.869 Fused Compare & Write: Not Supported 00:07:14.869 Scatter-Gather List 00:07:14.869 SGL Command Set: Supported 00:07:14.869 SGL Keyed: Not Supported 00:07:14.869 SGL Bit Bucket Descriptor: Not Supported 00:07:14.869 SGL Metadata Pointer: Not Supported 00:07:14.869 Oversized SGL: Not Supported 00:07:14.869 SGL Metadata Address: Not Supported 00:07:14.869 SGL Offset: Not Supported 00:07:14.869 Transport SGL Data Block: Not Supported 00:07:14.869 Replay Protected Memory Block: Not Supported 00:07:14.869 00:07:14.869 Firmware Slot Information 00:07:14.869 ========================= 00:07:14.869 Active slot: 1 00:07:14.869 Slot 1 Firmware Revision: 1.0 00:07:14.869 00:07:14.869 00:07:14.869 Commands Supported and Effects 00:07:14.869 ============================== 00:07:14.869 Admin Commands 00:07:14.869 -------------- 00:07:14.869 Delete I/O Submission Queue (00h): Supported 00:07:14.869 Create I/O Submission Queue (01h): Supported 00:07:14.869 Get Log Page (02h): Supported 00:07:14.869 Delete I/O Completion Queue (04h): Supported 00:07:14.869 Create I/O Completion Queue (05h): Supported 00:07:14.869 Identify (06h): Supported 00:07:14.869 Abort (08h): Supported 00:07:14.869 Set Features (09h): Supported 00:07:14.869 Get Features (0Ah): Supported 00:07:14.869 Asynchronous Event Request (0Ch): Supported 00:07:14.869 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.869 Directive Send (19h): Supported 00:07:14.869 Directive Receive (1Ah): Supported 00:07:14.869 Virtualization Management (1Ch): Supported 00:07:14.869 Doorbell Buffer Config (7Ch): Supported 00:07:14.869 Format NVM (80h): Supported LBA-Change 00:07:14.869 I/O Commands 00:07:14.869 ------------ 00:07:14.869 Flush (00h): Supported LBA-Change 00:07:14.869 Write (01h): Supported LBA-Change 00:07:14.869 Read (02h): Supported 00:07:14.869 Compare (05h): Supported 00:07:14.869 Write Zeroes (08h): Supported LBA-Change 00:07:14.869 Dataset Management (09h): Supported LBA-Change 00:07:14.869 Unknown (0Ch): Supported 00:07:14.869 Unknown (12h): Supported 00:07:14.869 Copy (19h): Supported LBA-Change 00:07:14.869 Unknown (1Dh): 
Supported LBA-Change 00:07:14.869 00:07:14.869 Error Log 00:07:14.869 ========= 00:07:14.869 00:07:14.869 Arbitration 00:07:14.869 =========== 00:07:14.869 Arbitration Burst: no limit 00:07:14.869 00:07:14.869 Power Management 00:07:14.870 ================ 00:07:14.870 Number of Power States: 1 00:07:14.870 Current Power State: Power State #0 00:07:14.870 Power State #0: 00:07:14.870 Max Power: 25.00 W 00:07:14.870 Non-Operational State: Operational 00:07:14.870 Entry Latency: 16 microseconds 00:07:14.870 Exit Latency: 4 microseconds 00:07:14.870 Relative Read Throughput: 0 00:07:14.870 Relative Read Latency: 0 00:07:14.870 Relative Write Throughput: 0 00:07:14.870 Relative Write Latency: 0 00:07:14.870 Idle Power: Not Reported 00:07:14.870 Active Power: Not Reported 00:07:14.870 Non-Operational Permissive Mode: Not Supported 00:07:14.870 00:07:14.870 Health Information 00:07:14.870 ================== 00:07:14.870 Critical Warnings: 00:07:14.870 Available Spare Space: OK 00:07:14.870 Temperature: OK 00:07:14.870 Device Reliability: OK 00:07:14.870 Read Only: No 00:07:14.870 Volatile Memory Backup: OK 00:07:14.870 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.870 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.870 Available Spare: 0% 00:07:14.870 Available Spare Threshold: 0% 00:07:14.870 Life Percentage Used: 0% 00:07:14.870 Data Units Read: 963 00:07:14.870 Data Units Written: 836 00:07:14.870 Host Read Commands: 51442 00:07:14.870 Host Write Commands: 50346 00:07:14.870 Controller Busy Time: 0 minutes 00:07:14.870 Power Cycles: 0 00:07:14.870 Power On Hours: 0 hours 00:07:14.870 Unsafe Shutdowns: 0 00:07:14.870 Unrecoverable Media Errors: 0 00:07:14.870 Lifetime Error Log Entries: 0 00:07:14.870 Warning Temperature Time: 0 minutes 00:07:14.870 Critical Temperature Time: 0 minutes 00:07:14.870 00:07:14.870 Number of Queues 00:07:14.870 ================ 00:07:14.870 Number of I/O Submission Queues: 64 00:07:14.870 Number of I/O Completion Queues: 64 00:07:14.870 00:07:14.870 ZNS Specific Controller Data 00:07:14.870 ============================ 00:07:14.870 Zone Append Size Limit: 0 00:07:14.870 00:07:14.870 00:07:14.870 Active Namespaces 00:07:14.870 ================= 00:07:14.870 Namespace ID:1 00:07:14.870 Error Recovery Timeout: Unlimited 00:07:14.870 Command Set Identifier: NVM (00h) 00:07:14.870 Deallocate: Supported 00:07:14.870 Deallocated/Unwritten Error: Supported 00:07:14.870 Deallocated Read Value: All 0x00 00:07:14.870 Deallocate in Write Zeroes: Not Supported 00:07:14.870 Deallocated Guard Field: 0xFFFF 00:07:14.870 Flush: Supported 00:07:14.870 Reservation: Not Supported 00:07:14.870 Namespace Sharing Capabilities: Private 00:07:14.870 Size (in LBAs): 1310720 (5GiB) 00:07:14.870 Capacity (in LBAs): 1310720 (5GiB) 00:07:14.870 Utilization (in LBAs): 1310720 (5GiB) 00:07:14.870 Thin Provisioning: Not Supported 00:07:14.870 Per-NS Atomic Units: No 00:07:14.870 Maximum Single Source Range Length: 128 00:07:14.870 Maximum Copy Length: 128 00:07:14.870 Maximum Source Range Count: 128 00:07:14.870 NGUID/EUI64 Never Reused: No 00:07:14.870 Namespace Write Protected: No 00:07:14.870 Number of LBA Formats: 8 00:07:14.870 Current LBA Format: LBA Format #04 00:07:14.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.870 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.870 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.870 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.870 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:14.870 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.870 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.870 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.870 00:07:14.870 NVM Specific Namespace Data 00:07:14.870 =========================== 00:07:14.870 Logical Block Storage Tag Mask: 0 00:07:14.870 Protection Information Capabilities: 00:07:14.870 16b Guard Protection Information Storage Tag Support: No 00:07:14.870 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.870 Storage Tag Check Read Support: No 00:07:14.870 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.870 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:14.870 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:14.870 ===================================================== 00:07:14.870 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:14.870 ===================================================== 00:07:14.870 Controller Capabilities/Features 00:07:14.870 ================================ 00:07:14.870 Vendor ID: 1b36 00:07:14.870 Subsystem Vendor ID: 1af4 00:07:14.870 Serial Number: 12342 00:07:14.870 Model Number: QEMU NVMe Ctrl 00:07:14.870 Firmware Version: 8.0.0 00:07:14.870 Recommended Arb Burst: 6 00:07:14.870 IEEE OUI Identifier: 00 54 52 00:07:14.870 Multi-path I/O 00:07:14.870 May have multiple subsystem ports: No 00:07:14.870 May have multiple controllers: No 00:07:14.870 Associated with SR-IOV VF: No 00:07:14.870 Max Data Transfer Size: 524288 00:07:14.870 Max Number of Namespaces: 256 00:07:14.870 Max Number of I/O Queues: 64 00:07:14.870 NVMe Specification Version (VS): 1.4 00:07:14.870 NVMe Specification Version (Identify): 1.4 00:07:14.870 Maximum Queue Entries: 2048 00:07:14.870 Contiguous Queues Required: Yes 00:07:14.870 Arbitration Mechanisms Supported 00:07:14.870 Weighted Round Robin: Not Supported 00:07:14.870 Vendor Specific: Not Supported 00:07:14.870 Reset Timeout: 7500 ms 00:07:14.870 Doorbell Stride: 4 bytes 00:07:14.870 NVM Subsystem Reset: Not Supported 00:07:14.870 Command Sets Supported 00:07:14.870 NVM Command Set: Supported 00:07:14.870 Boot Partition: Not Supported 00:07:14.870 Memory Page Size Minimum: 4096 bytes 00:07:14.870 Memory Page Size Maximum: 65536 bytes 00:07:14.870 Persistent Memory Region: Not Supported 00:07:14.870 Optional Asynchronous Events Supported 00:07:14.870 Namespace Attribute Notices: Supported 00:07:14.870 Firmware Activation Notices: Not Supported 00:07:14.870 ANA Change Notices: Not Supported 00:07:14.870 PLE Aggregate Log Change Notices: Not Supported 00:07:14.870 LBA Status Info Alert Notices: 
Not Supported 00:07:14.870 EGE Aggregate Log Change Notices: Not Supported 00:07:14.870 Normal NVM Subsystem Shutdown event: Not Supported 00:07:14.870 Zone Descriptor Change Notices: Not Supported 00:07:14.870 Discovery Log Change Notices: Not Supported 00:07:14.870 Controller Attributes 00:07:14.870 128-bit Host Identifier: Not Supported 00:07:14.870 Non-Operational Permissive Mode: Not Supported 00:07:14.870 NVM Sets: Not Supported 00:07:14.870 Read Recovery Levels: Not Supported 00:07:14.870 Endurance Groups: Not Supported 00:07:14.870 Predictable Latency Mode: Not Supported 00:07:14.870 Traffic Based Keep Alive: Not Supported 00:07:14.870 Namespace Granularity: Not Supported 00:07:14.870 SQ Associations: Not Supported 00:07:14.870 UUID List: Not Supported 00:07:14.870 Multi-Domain Subsystem: Not Supported 00:07:14.870 Fixed Capacity Management: Not Supported 00:07:14.870 Variable Capacity Management: Not Supported 00:07:14.870 Delete Endurance Group: Not Supported 00:07:14.870 Delete NVM Set: Not Supported 00:07:14.870 Extended LBA Formats Supported: Supported 00:07:14.870 Flexible Data Placement Supported: Not Supported 00:07:14.870 00:07:14.870 Controller Memory Buffer Support 00:07:14.870 ================================ 00:07:14.870 Supported: No 00:07:14.870 00:07:14.870 Persistent Memory Region Support 00:07:14.870 ================================ 00:07:14.870 Supported: No 00:07:14.870 00:07:14.870 Admin Command Set Attributes 00:07:14.870 ============================ 00:07:14.870 Security Send/Receive: Not Supported 00:07:14.870 Format NVM: Supported 00:07:14.870 Firmware Activate/Download: Not Supported 00:07:14.870 Namespace Management: Supported 00:07:14.870 Device Self-Test: Not Supported 00:07:14.870 Directives: Supported 00:07:14.870 NVMe-MI: Not Supported 00:07:14.870 Virtualization Management: Not Supported 00:07:14.870 Doorbell Buffer Config: Supported 00:07:14.870 Get LBA Status Capability: Not Supported 00:07:14.870 Command & Feature Lockdown Capability: Not Supported 00:07:14.870 Abort Command Limit: 4 00:07:14.870 Async Event Request Limit: 4 00:07:14.870 Number of Firmware Slots: N/A 00:07:14.870 Firmware Slot 1 Read-Only: N/A 00:07:14.871 Firmware Activation Without Reset: N/A 00:07:14.871 Multiple Update Detection Support: N/A 00:07:14.871 Firmware Update Granularity: No Information Provided 00:07:14.871 Per-Namespace SMART Log: Yes 00:07:14.871 Asymmetric Namespace Access Log Page: Not Supported 00:07:14.871 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:14.871 Command Effects Log Page: Supported 00:07:14.871 Get Log Page Extended Data: Supported 00:07:14.871 Telemetry Log Pages: Not Supported 00:07:14.871 Persistent Event Log Pages: Not Supported 00:07:14.871 Supported Log Pages Log Page: May Support 00:07:14.871 Commands Supported & Effects Log Page: Not Supported 00:07:14.871 Feature Identifiers & Effects Log Page: May Support 00:07:14.871 NVMe-MI Commands & Effects Log Page: May Support 00:07:14.871 Data Area 4 for Telemetry Log: Not Supported 00:07:14.871 Error Log Page Entries Supported: 1 00:07:14.871 Keep Alive: Not Supported 00:07:14.871 00:07:14.871 NVM Command Set Attributes 00:07:14.871 ========================== 00:07:14.871 Submission Queue Entry Size 00:07:14.871 Max: 64 00:07:14.871 Min: 64 00:07:14.871 Completion Queue Entry Size 00:07:14.871 Max: 16 00:07:14.871 Min: 16 00:07:14.871 Number of Namespaces: 256 00:07:14.871 Compare Command: Supported 00:07:14.871 Write Uncorrectable Command: Not Supported 00:07:14.871 Dataset Management Command: 
Supported 00:07:14.871 Write Zeroes Command: Supported 00:07:14.871 Set Features Save Field: Supported 00:07:14.871 Reservations: Not Supported 00:07:14.871 Timestamp: Supported 00:07:14.871 Copy: Supported 00:07:14.871 Volatile Write Cache: Present 00:07:14.871 Atomic Write Unit (Normal): 1 00:07:14.871 Atomic Write Unit (PFail): 1 00:07:14.871 Atomic Compare & Write Unit: 1 00:07:14.871 Fused Compare & Write: Not Supported 00:07:14.871 Scatter-Gather List 00:07:14.871 SGL Command Set: Supported 00:07:14.871 SGL Keyed: Not Supported 00:07:14.871 SGL Bit Bucket Descriptor: Not Supported 00:07:14.871 SGL Metadata Pointer: Not Supported 00:07:14.871 Oversized SGL: Not Supported 00:07:14.871 SGL Metadata Address: Not Supported 00:07:14.871 SGL Offset: Not Supported 00:07:14.871 Transport SGL Data Block: Not Supported 00:07:14.871 Replay Protected Memory Block: Not Supported 00:07:14.871 00:07:14.871 Firmware Slot Information 00:07:14.871 ========================= 00:07:14.871 Active slot: 1 00:07:14.871 Slot 1 Firmware Revision: 1.0 00:07:14.871 00:07:14.871 00:07:14.871 Commands Supported and Effects 00:07:14.871 ============================== 00:07:14.871 Admin Commands 00:07:14.871 -------------- 00:07:14.871 Delete I/O Submission Queue (00h): Supported 00:07:14.871 Create I/O Submission Queue (01h): Supported 00:07:14.871 Get Log Page (02h): Supported 00:07:14.871 Delete I/O Completion Queue (04h): Supported 00:07:14.871 Create I/O Completion Queue (05h): Supported 00:07:14.871 Identify (06h): Supported 00:07:14.871 Abort (08h): Supported 00:07:14.871 Set Features (09h): Supported 00:07:14.871 Get Features (0Ah): Supported 00:07:14.871 Asynchronous Event Request (0Ch): Supported 00:07:14.871 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:14.871 Directive Send (19h): Supported 00:07:14.871 Directive Receive (1Ah): Supported 00:07:14.871 Virtualization Management (1Ch): Supported 00:07:14.871 Doorbell Buffer Config (7Ch): Supported 00:07:14.871 Format NVM (80h): Supported LBA-Change 00:07:14.871 I/O Commands 00:07:14.871 ------------ 00:07:14.871 Flush (00h): Supported LBA-Change 00:07:14.871 Write (01h): Supported LBA-Change 00:07:14.871 Read (02h): Supported 00:07:14.871 Compare (05h): Supported 00:07:14.871 Write Zeroes (08h): Supported LBA-Change 00:07:14.871 Dataset Management (09h): Supported LBA-Change 00:07:14.871 Unknown (0Ch): Supported 00:07:14.871 Unknown (12h): Supported 00:07:14.871 Copy (19h): Supported LBA-Change 00:07:14.871 Unknown (1Dh): Supported LBA-Change 00:07:14.871 00:07:14.871 Error Log 00:07:14.871 ========= 00:07:14.871 00:07:14.871 Arbitration 00:07:14.871 =========== 00:07:14.871 Arbitration Burst: no limit 00:07:14.871 00:07:14.871 Power Management 00:07:14.871 ================ 00:07:14.871 Number of Power States: 1 00:07:14.871 Current Power State: Power State #0 00:07:14.871 Power State #0: 00:07:14.871 Max Power: 25.00 W 00:07:14.871 Non-Operational State: Operational 00:07:14.871 Entry Latency: 16 microseconds 00:07:14.871 Exit Latency: 4 microseconds 00:07:14.871 Relative Read Throughput: 0 00:07:14.871 Relative Read Latency: 0 00:07:14.871 Relative Write Throughput: 0 00:07:14.871 Relative Write Latency: 0 00:07:14.871 Idle Power: Not Reported 00:07:14.871 Active Power: Not Reported 00:07:14.871 Non-Operational Permissive Mode: Not Supported 00:07:14.871 00:07:14.871 Health Information 00:07:14.871 ================== 00:07:14.871 Critical Warnings: 00:07:14.871 Available Spare Space: OK 00:07:14.871 Temperature: OK 00:07:14.871 Device 
Reliability: OK 00:07:14.871 Read Only: No 00:07:14.871 Volatile Memory Backup: OK 00:07:14.871 Current Temperature: 323 Kelvin (50 Celsius) 00:07:14.871 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:14.871 Available Spare: 0% 00:07:14.871 Available Spare Threshold: 0% 00:07:14.871 Life Percentage Used: 0% 00:07:14.871 Data Units Read: 2071 00:07:14.871 Data Units Written: 1858 00:07:14.871 Host Read Commands: 108158 00:07:14.871 Host Write Commands: 106428 00:07:14.871 Controller Busy Time: 0 minutes 00:07:14.871 Power Cycles: 0 00:07:14.871 Power On Hours: 0 hours 00:07:14.871 Unsafe Shutdowns: 0 00:07:14.871 Unrecoverable Media Errors: 0 00:07:14.871 Lifetime Error Log Entries: 0 00:07:14.871 Warning Temperature Time: 0 minutes 00:07:14.871 Critical Temperature Time: 0 minutes 00:07:14.871 00:07:14.871 Number of Queues 00:07:14.871 ================ 00:07:14.871 Number of I/O Submission Queues: 64 00:07:14.871 Number of I/O Completion Queues: 64 00:07:14.871 00:07:14.871 ZNS Specific Controller Data 00:07:14.871 ============================ 00:07:14.871 Zone Append Size Limit: 0 00:07:14.871 00:07:14.871 00:07:14.871 Active Namespaces 00:07:14.871 ================= 00:07:14.871 Namespace ID:1 00:07:14.871 Error Recovery Timeout: Unlimited 00:07:14.871 Command Set Identifier: NVM (00h) 00:07:14.871 Deallocate: Supported 00:07:14.871 Deallocated/Unwritten Error: Supported 00:07:14.871 Deallocated Read Value: All 0x00 00:07:14.871 Deallocate in Write Zeroes: Not Supported 00:07:14.871 Deallocated Guard Field: 0xFFFF 00:07:14.871 Flush: Supported 00:07:14.871 Reservation: Not Supported 00:07:14.871 Namespace Sharing Capabilities: Private 00:07:14.871 Size (in LBAs): 1048576 (4GiB) 00:07:14.871 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.871 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.871 Thin Provisioning: Not Supported 00:07:14.871 Per-NS Atomic Units: No 00:07:14.871 Maximum Single Source Range Length: 128 00:07:14.871 Maximum Copy Length: 128 00:07:14.871 Maximum Source Range Count: 128 00:07:14.871 NGUID/EUI64 Never Reused: No 00:07:14.871 Namespace Write Protected: No 00:07:14.871 Number of LBA Formats: 8 00:07:14.871 Current LBA Format: LBA Format #04 00:07:14.871 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.871 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.871 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.871 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.871 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.871 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.871 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.871 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.871 00:07:14.871 NVM Specific Namespace Data 00:07:14.871 =========================== 00:07:14.871 Logical Block Storage Tag Mask: 0 00:07:14.871 Protection Information Capabilities: 00:07:14.871 16b Guard Protection Information Storage Tag Support: No 00:07:14.872 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.872 Storage Tag Check Read Support: No 00:07:14.872 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Namespace ID:2 00:07:14.872 Error Recovery Timeout: Unlimited 00:07:14.872 Command Set Identifier: NVM (00h) 00:07:14.872 Deallocate: Supported 00:07:14.872 Deallocated/Unwritten Error: Supported 00:07:14.872 Deallocated Read Value: All 0x00 00:07:14.872 Deallocate in Write Zeroes: Not Supported 00:07:14.872 Deallocated Guard Field: 0xFFFF 00:07:14.872 Flush: Supported 00:07:14.872 Reservation: Not Supported 00:07:14.872 Namespace Sharing Capabilities: Private 00:07:14.872 Size (in LBAs): 1048576 (4GiB) 00:07:14.872 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.872 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.872 Thin Provisioning: Not Supported 00:07:14.872 Per-NS Atomic Units: No 00:07:14.872 Maximum Single Source Range Length: 128 00:07:14.872 Maximum Copy Length: 128 00:07:14.872 Maximum Source Range Count: 128 00:07:14.872 NGUID/EUI64 Never Reused: No 00:07:14.872 Namespace Write Protected: No 00:07:14.872 Number of LBA Formats: 8 00:07:14.872 Current LBA Format: LBA Format #04 00:07:14.872 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.872 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.872 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.872 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.872 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.872 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.872 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.872 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.872 00:07:14.872 NVM Specific Namespace Data 00:07:14.872 =========================== 00:07:14.872 Logical Block Storage Tag Mask: 0 00:07:14.872 Protection Information Capabilities: 00:07:14.872 16b Guard Protection Information Storage Tag Support: No 00:07:14.872 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.872 Storage Tag Check Read Support: No 00:07:14.872 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Namespace ID:3 00:07:14.872 Error Recovery Timeout: Unlimited 00:07:14.872 Command Set Identifier: NVM (00h) 00:07:14.872 Deallocate: Supported 00:07:14.872 Deallocated/Unwritten Error: Supported 00:07:14.872 Deallocated Read Value: All 0x00 00:07:14.872 Deallocate in Write Zeroes: Not Supported 00:07:14.872 Deallocated Guard Field: 0xFFFF 00:07:14.872 Flush: Supported 00:07:14.872 Reservation: Not Supported 00:07:14.872 
Namespace Sharing Capabilities: Private 00:07:14.872 Size (in LBAs): 1048576 (4GiB) 00:07:14.872 Capacity (in LBAs): 1048576 (4GiB) 00:07:14.872 Utilization (in LBAs): 1048576 (4GiB) 00:07:14.872 Thin Provisioning: Not Supported 00:07:14.872 Per-NS Atomic Units: No 00:07:14.872 Maximum Single Source Range Length: 128 00:07:14.872 Maximum Copy Length: 128 00:07:14.872 Maximum Source Range Count: 128 00:07:14.872 NGUID/EUI64 Never Reused: No 00:07:14.872 Namespace Write Protected: No 00:07:14.872 Number of LBA Formats: 8 00:07:14.872 Current LBA Format: LBA Format #04 00:07:14.872 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:14.872 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:14.872 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:14.872 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:14.872 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:14.872 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:14.872 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:14.872 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:14.872 00:07:14.872 NVM Specific Namespace Data 00:07:14.872 =========================== 00:07:14.872 Logical Block Storage Tag Mask: 0 00:07:14.872 Protection Information Capabilities: 00:07:14.872 16b Guard Protection Information Storage Tag Support: No 00:07:14.872 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:14.872 Storage Tag Check Read Support: No 00:07:14.872 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:14.872 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:14.872 02:54:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:15.133 ===================================================== 00:07:15.133 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:15.133 ===================================================== 00:07:15.133 Controller Capabilities/Features 00:07:15.133 ================================ 00:07:15.133 Vendor ID: 1b36 00:07:15.133 Subsystem Vendor ID: 1af4 00:07:15.133 Serial Number: 12343 00:07:15.133 Model Number: QEMU NVMe Ctrl 00:07:15.133 Firmware Version: 8.0.0 00:07:15.133 Recommended Arb Burst: 6 00:07:15.133 IEEE OUI Identifier: 00 54 52 00:07:15.133 Multi-path I/O 00:07:15.133 May have multiple subsystem ports: No 00:07:15.133 May have multiple controllers: Yes 00:07:15.133 Associated with SR-IOV VF: No 00:07:15.133 Max Data Transfer Size: 524288 00:07:15.133 Max Number of Namespaces: 256 00:07:15.133 Max Number of I/O Queues: 64 00:07:15.133 NVMe Specification Version (VS): 1.4 00:07:15.133 NVMe Specification Version (Identify): 1.4 00:07:15.133 Maximum Queue Entries: 2048 
00:07:15.133 Contiguous Queues Required: Yes 00:07:15.133 Arbitration Mechanisms Supported 00:07:15.133 Weighted Round Robin: Not Supported 00:07:15.133 Vendor Specific: Not Supported 00:07:15.133 Reset Timeout: 7500 ms 00:07:15.133 Doorbell Stride: 4 bytes 00:07:15.133 NVM Subsystem Reset: Not Supported 00:07:15.133 Command Sets Supported 00:07:15.133 NVM Command Set: Supported 00:07:15.133 Boot Partition: Not Supported 00:07:15.134 Memory Page Size Minimum: 4096 bytes 00:07:15.134 Memory Page Size Maximum: 65536 bytes 00:07:15.134 Persistent Memory Region: Not Supported 00:07:15.134 Optional Asynchronous Events Supported 00:07:15.134 Namespace Attribute Notices: Supported 00:07:15.134 Firmware Activation Notices: Not Supported 00:07:15.134 ANA Change Notices: Not Supported 00:07:15.134 PLE Aggregate Log Change Notices: Not Supported 00:07:15.134 LBA Status Info Alert Notices: Not Supported 00:07:15.134 EGE Aggregate Log Change Notices: Not Supported 00:07:15.134 Normal NVM Subsystem Shutdown event: Not Supported 00:07:15.134 Zone Descriptor Change Notices: Not Supported 00:07:15.134 Discovery Log Change Notices: Not Supported 00:07:15.134 Controller Attributes 00:07:15.134 128-bit Host Identifier: Not Supported 00:07:15.134 Non-Operational Permissive Mode: Not Supported 00:07:15.134 NVM Sets: Not Supported 00:07:15.134 Read Recovery Levels: Not Supported 00:07:15.134 Endurance Groups: Supported 00:07:15.134 Predictable Latency Mode: Not Supported 00:07:15.134 Traffic Based Keep Alive: Not Supported 00:07:15.134 Namespace Granularity: Not Supported 00:07:15.134 SQ Associations: Not Supported 00:07:15.134 UUID List: Not Supported 00:07:15.134 Multi-Domain Subsystem: Not Supported 00:07:15.134 Fixed Capacity Management: Not Supported 00:07:15.134 Variable Capacity Management: Not Supported 00:07:15.134 Delete Endurance Group: Not Supported 00:07:15.134 Delete NVM Set: Not Supported 00:07:15.134 Extended LBA Formats Supported: Supported 00:07:15.134 Flexible Data Placement Supported: Supported 00:07:15.134 00:07:15.134 Controller Memory Buffer Support 00:07:15.134 ================================ 00:07:15.134 Supported: No 00:07:15.134 00:07:15.134 Persistent Memory Region Support 00:07:15.134 ================================ 00:07:15.134 Supported: No 00:07:15.134 00:07:15.134 Admin Command Set Attributes 00:07:15.134 ============================ 00:07:15.134 Security Send/Receive: Not Supported 00:07:15.134 Format NVM: Supported 00:07:15.134 Firmware Activate/Download: Not Supported 00:07:15.134 Namespace Management: Supported 00:07:15.134 Device Self-Test: Not Supported 00:07:15.134 Directives: Supported 00:07:15.134 NVMe-MI: Not Supported 00:07:15.134 Virtualization Management: Not Supported 00:07:15.134 Doorbell Buffer Config: Supported 00:07:15.134 Get LBA Status Capability: Not Supported 00:07:15.134 Command & Feature Lockdown Capability: Not Supported 00:07:15.134 Abort Command Limit: 4 00:07:15.134 Async Event Request Limit: 4 00:07:15.134 Number of Firmware Slots: N/A 00:07:15.134 Firmware Slot 1 Read-Only: N/A 00:07:15.134 Firmware Activation Without Reset: N/A 00:07:15.134 Multiple Update Detection Support: N/A 00:07:15.134 Firmware Update Granularity: No Information Provided 00:07:15.134 Per-Namespace SMART Log: Yes 00:07:15.134 Asymmetric Namespace Access Log Page: Not Supported 00:07:15.134 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:15.134 Command Effects Log Page: Supported 00:07:15.134 Get Log Page Extended Data: Supported 00:07:15.134 Telemetry Log Pages: Not
Supported 00:07:15.134 Persistent Event Log Pages: Not Supported 00:07:15.134 Supported Log Pages Log Page: May Support 00:07:15.134 Commands Supported & Effects Log Page: Not Supported 00:07:15.134 Feature Identifiers & Effects Log Page: May Support 00:07:15.134 NVMe-MI Commands & Effects Log Page: May Support 00:07:15.134 Data Area 4 for Telemetry Log: Not Supported 00:07:15.134 Error Log Page Entries Supported: 1 00:07:15.134 Keep Alive: Not Supported 00:07:15.134 00:07:15.134 NVM Command Set Attributes 00:07:15.134 ========================== 00:07:15.134 Submission Queue Entry Size 00:07:15.134 Max: 64 00:07:15.134 Min: 64 00:07:15.134 Completion Queue Entry Size 00:07:15.134 Max: 16 00:07:15.134 Min: 16 00:07:15.134 Number of Namespaces: 256 00:07:15.134 Compare Command: Supported 00:07:15.134 Write Uncorrectable Command: Not Supported 00:07:15.134 Dataset Management Command: Supported 00:07:15.134 Write Zeroes Command: Supported 00:07:15.134 Set Features Save Field: Supported 00:07:15.134 Reservations: Not Supported 00:07:15.134 Timestamp: Supported 00:07:15.134 Copy: Supported 00:07:15.134 Volatile Write Cache: Present 00:07:15.134 Atomic Write Unit (Normal): 1 00:07:15.134 Atomic Write Unit (PFail): 1 00:07:15.134 Atomic Compare & Write Unit: 1 00:07:15.134 Fused Compare & Write: Not Supported 00:07:15.134 Scatter-Gather List 00:07:15.134 SGL Command Set: Supported 00:07:15.134 SGL Keyed: Not Supported 00:07:15.134 SGL Bit Bucket Descriptor: Not Supported 00:07:15.134 SGL Metadata Pointer: Not Supported 00:07:15.134 Oversized SGL: Not Supported 00:07:15.134 SGL Metadata Address: Not Supported 00:07:15.134 SGL Offset: Not Supported 00:07:15.134 Transport SGL Data Block: Not Supported 00:07:15.134 Replay Protected Memory Block: Not Supported 00:07:15.134 00:07:15.134 Firmware Slot Information 00:07:15.134 ========================= 00:07:15.134 Active slot: 1 00:07:15.134 Slot 1 Firmware Revision: 1.0 00:07:15.134 00:07:15.134 00:07:15.134 Commands Supported and Effects 00:07:15.134 ============================== 00:07:15.134 Admin Commands 00:07:15.134 -------------- 00:07:15.134 Delete I/O Submission Queue (00h): Supported 00:07:15.134 Create I/O Submission Queue (01h): Supported 00:07:15.134 Get Log Page (02h): Supported 00:07:15.134 Delete I/O Completion Queue (04h): Supported 00:07:15.134 Create I/O Completion Queue (05h): Supported 00:07:15.134 Identify (06h): Supported 00:07:15.134 Abort (08h): Supported 00:07:15.134 Set Features (09h): Supported 00:07:15.134 Get Features (0Ah): Supported 00:07:15.134 Asynchronous Event Request (0Ch): Supported 00:07:15.134 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:15.134 Directive Send (19h): Supported 00:07:15.134 Directive Receive (1Ah): Supported 00:07:15.134 Virtualization Management (1Ch): Supported 00:07:15.134 Doorbell Buffer Config (7Ch): Supported 00:07:15.134 Format NVM (80h): Supported LBA-Change 00:07:15.134 I/O Commands 00:07:15.134 ------------ 00:07:15.134 Flush (00h): Supported LBA-Change 00:07:15.134 Write (01h): Supported LBA-Change 00:07:15.134 Read (02h): Supported 00:07:15.134 Compare (05h): Supported 00:07:15.134 Write Zeroes (08h): Supported LBA-Change 00:07:15.134 Dataset Management (09h): Supported LBA-Change 00:07:15.134 Unknown (0Ch): Supported 00:07:15.134 Unknown (12h): Supported 00:07:15.134 Copy (19h): Supported LBA-Change 00:07:15.134 Unknown (1Dh): Supported LBA-Change 00:07:15.134 00:07:15.134 Error Log 00:07:15.134 ========= 00:07:15.134 00:07:15.134 Arbitration 00:07:15.134 ===========
00:07:15.134 Arbitration Burst: no limit 00:07:15.134 00:07:15.134 Power Management 00:07:15.134 ================ 00:07:15.134 Number of Power States: 1 00:07:15.134 Current Power State: Power State #0 00:07:15.134 Power State #0: 00:07:15.134 Max Power: 25.00 W 00:07:15.134 Non-Operational State: Operational 00:07:15.134 Entry Latency: 16 microseconds 00:07:15.134 Exit Latency: 4 microseconds 00:07:15.134 Relative Read Throughput: 0 00:07:15.134 Relative Read Latency: 0 00:07:15.134 Relative Write Throughput: 0 00:07:15.134 Relative Write Latency: 0 00:07:15.134 Idle Power: Not Reported 00:07:15.134 Active Power: Not Reported 00:07:15.134 Non-Operational Permissive Mode: Not Supported 00:07:15.134 00:07:15.134 Health Information 00:07:15.134 ================== 00:07:15.134 Critical Warnings: 00:07:15.134 Available Spare Space: OK 00:07:15.134 Temperature: OK 00:07:15.134 Device Reliability: OK 00:07:15.134 Read Only: No 00:07:15.134 Volatile Memory Backup: OK 00:07:15.134 Current Temperature: 323 Kelvin (50 Celsius) 00:07:15.134 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:15.134 Available Spare: 0% 00:07:15.134 Available Spare Threshold: 0% 00:07:15.134 Life Percentage Used: 0% 00:07:15.134 Data Units Read: 765 00:07:15.134 Data Units Written: 694 00:07:15.134 Host Read Commands: 36832 00:07:15.134 Host Write Commands: 36258 00:07:15.134 Controller Busy Time: 0 minutes 00:07:15.134 Power Cycles: 0 00:07:15.134 Power On Hours: 0 hours 00:07:15.134 Unsafe Shutdowns: 0 00:07:15.134 Unrecoverable Media Errors: 0 00:07:15.134 Lifetime Error Log Entries: 0 00:07:15.134 Warning Temperature Time: 0 minutes 00:07:15.134 Critical Temperature Time: 0 minutes 00:07:15.134 00:07:15.134 Number of Queues 00:07:15.134 ================ 00:07:15.134 Number of I/O Submission Queues: 64 00:07:15.134 Number of I/O Completion Queues: 64 00:07:15.134 00:07:15.134 ZNS Specific Controller Data 00:07:15.134 ============================ 00:07:15.134 Zone Append Size Limit: 0 00:07:15.134 00:07:15.134 00:07:15.134 Active Namespaces 00:07:15.134 ================= 00:07:15.134 Namespace ID:1 00:07:15.135 Error Recovery Timeout: Unlimited 00:07:15.135 Command Set Identifier: NVM (00h) 00:07:15.135 Deallocate: Supported 00:07:15.135 Deallocated/Unwritten Error: Supported 00:07:15.135 Deallocated Read Value: All 0x00 00:07:15.135 Deallocate in Write Zeroes: Not Supported 00:07:15.135 Deallocated Guard Field: 0xFFFF 00:07:15.135 Flush: Supported 00:07:15.135 Reservation: Not Supported 00:07:15.135 Namespace Sharing Capabilities: Multiple Controllers 00:07:15.135 Size (in LBAs): 262144 (1GiB) 00:07:15.135 Capacity (in LBAs): 262144 (1GiB) 00:07:15.135 Utilization (in LBAs): 262144 (1GiB) 00:07:15.135 Thin Provisioning: Not Supported 00:07:15.135 Per-NS Atomic Units: No 00:07:15.135 Maximum Single Source Range Length: 128 00:07:15.135 Maximum Copy Length: 128 00:07:15.135 Maximum Source Range Count: 128 00:07:15.135 NGUID/EUI64 Never Reused: No 00:07:15.135 Namespace Write Protected: No 00:07:15.135 Endurance group ID: 1 00:07:15.135 Number of LBA Formats: 8 00:07:15.135 Current LBA Format: LBA Format #04 00:07:15.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:15.135 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:15.135 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:15.135 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:15.135 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:15.135 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:15.135 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:15.135 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:15.135 00:07:15.135 Get Feature FDP: 00:07:15.135 ================ 00:07:15.135 Enabled: Yes 00:07:15.135 FDP configuration index: 0 00:07:15.135 00:07:15.135 FDP configurations log page 00:07:15.135 =========================== 00:07:15.135 Number of FDP configurations: 1 00:07:15.135 Version: 0 00:07:15.135 Size: 112 00:07:15.135 FDP Configuration Descriptor: 0 00:07:15.135 Descriptor Size: 96 00:07:15.135 Reclaim Group Identifier format: 2 00:07:15.135 FDP Volatile Write Cache: Not Present 00:07:15.135 FDP Configuration: Valid 00:07:15.135 Vendor Specific Size: 0 00:07:15.135 Number of Reclaim Groups: 2 00:07:15.135 Number of Reclaim Unit Handles: 8 00:07:15.135 Max Placement Identifiers: 128 00:07:15.135 Number of Namespaces Supported: 256 00:07:15.135 Reclaim unit Nominal Size: 6000000 bytes 00:07:15.135 Estimated Reclaim Unit Time Limit: Not Reported 00:07:15.135 RUH Desc #000: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #001: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #002: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #003: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #004: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #005: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #006: RUH Type: Initially Isolated 00:07:15.135 RUH Desc #007: RUH Type: Initially Isolated 00:07:15.135 00:07:15.135 FDP reclaim unit handle usage log page 00:07:15.135 ====================================== 00:07:15.135 Number of Reclaim Unit Handles: 8 00:07:15.135 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:15.135 RUH Usage Desc #001: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #002: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #003: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #004: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #005: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #006: RUH Attributes: Unused 00:07:15.135 RUH Usage Desc #007: RUH Attributes: Unused 00:07:15.135 00:07:15.135 FDP statistics log page 00:07:15.135 ======================= 00:07:15.135 Host bytes with metadata written: 443850752 00:07:15.135 Media bytes with metadata written: 443904000 00:07:15.135 Media bytes erased: 0 00:07:15.135 00:07:15.135 FDP events log page 00:07:15.135 =================== 00:07:15.135 Number of FDP events: 0 00:07:15.135 00:07:15.135 NVM Specific Namespace Data 00:07:15.135 =========================== 00:07:15.135 Logical Block Storage Tag Mask: 0 00:07:15.135 Protection Information Capabilities: 00:07:15.135 16b Guard Protection Information Storage Tag Support: No 00:07:15.135 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:15.135 Storage Tag Check Read Support: No 00:07:15.135 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:15.135 00:07:15.135 real 0m1.005s 00:07:15.135 user 0m0.361s 00:07:15.135 sys 0m0.433s 00:07:15.135 02:54:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.135 ************************************ 00:07:15.135 END TEST nvme_identify 00:07:15.135 ************************************ 00:07:15.135 02:54:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:15.135 02:54:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:15.135 02:54:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.135 02:54:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.135 02:54:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.135 ************************************ 00:07:15.135 START TEST nvme_perf 00:07:15.135 ************************************ 00:07:15.135 02:54:31 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:15.135 02:54:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:16.578 Initializing NVMe Controllers 00:07:16.578 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:16.578 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:16.578 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:16.578 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:16.578 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:16.578 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:16.578 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:16.578 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:16.578 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:16.578 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:16.578 Initialization complete. Launching workers. 
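For reference, the whole nvme_perf stage is driven by the single spdk_nvme_perf command captured in the xtrace output above. A minimal annotated sketch of that invocation follows; the -q/-w/-o/-t readings follow the tool's usage text, while the -LL, -i and -N readings are assumptions worth confirming against spdk_nvme_perf -h on the build in use:

  # -q 128  : I/O queue depth per namespace
  # -w read : I/O pattern (100% reads)
  # -o 12288: I/O size in bytes (12 KiB, i.e. three 4096-byte blocks of the current LBA Format #04)
  # -t 1    : run time in seconds
  # -LL     : software latency tracking; assumed that giving -L twice is what
  #           produces the detailed per-range latency histograms further below
  # -i 0    : shared-memory group ID (matches the -i 0 passed to spdk_nvme_identify above)
  # -N      : assumed: skip the controller shutdown notification on exit
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N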
00:07:16.578 ======================================================== 00:07:16.578 Latency(us) 00:07:16.578 Device Information : IOPS MiB/s Average min max 00:07:16.578 PCIE (0000:00:10.0) NSID 1 from core 0: 11977.30 140.36 10694.50 5959.46 29051.41 00:07:16.578 PCIE (0000:00:11.0) NSID 1 from core 0: 11977.30 140.36 10690.64 5990.43 28412.44 00:07:16.578 PCIE (0000:00:13.0) NSID 1 from core 0: 11977.30 140.36 10682.06 5991.99 29121.18 00:07:16.578 PCIE (0000:00:12.0) NSID 1 from core 0: 11977.30 140.36 10673.30 6073.52 28474.27 00:07:16.578 PCIE (0000:00:12.0) NSID 2 from core 0: 11977.30 140.36 10664.31 6158.42 27949.06 00:07:16.578 PCIE (0000:00:12.0) NSID 3 from core 0: 11977.30 140.36 10655.70 5146.24 27396.43 00:07:16.578 ======================================================== 00:07:16.578 Total : 71863.80 842.15 10676.75 5146.24 29121.18 00:07:16.578 00:07:16.578 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:16.578 ================================================================================= 00:07:16.578 1.00000% : 6604.012us 00:07:16.578 10.00000% : 7612.258us 00:07:16.578 25.00000% : 8368.443us 00:07:16.578 50.00000% : 9729.575us 00:07:16.578 75.00000% : 12603.077us 00:07:16.578 90.00000% : 15123.692us 00:07:16.578 95.00000% : 16636.062us 00:07:16.578 98.00000% : 18047.606us 00:07:16.578 99.00000% : 20870.695us 00:07:16.578 99.50000% : 27625.945us 00:07:16.578 99.90000% : 28835.840us 00:07:16.578 99.99000% : 29037.489us 00:07:16.578 99.99900% : 29239.138us 00:07:16.578 99.99990% : 29239.138us 00:07:16.578 99.99999% : 29239.138us 00:07:16.578 00:07:16.579 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:16.579 ================================================================================= 00:07:16.579 1.00000% : 6604.012us 00:07:16.579 10.00000% : 7612.258us 00:07:16.579 25.00000% : 8368.443us 00:07:16.579 50.00000% : 9729.575us 00:07:16.579 75.00000% : 12552.665us 00:07:16.579 90.00000% : 15123.692us 00:07:16.579 95.00000% : 16636.062us 00:07:16.579 98.00000% : 18148.431us 00:07:16.579 99.00000% : 20870.695us 00:07:16.579 99.50000% : 27625.945us 00:07:16.579 99.90000% : 28230.892us 00:07:16.579 99.99000% : 28432.542us 00:07:16.579 99.99900% : 28432.542us 00:07:16.579 99.99990% : 28432.542us 00:07:16.579 99.99999% : 28432.542us 00:07:16.579 00:07:16.579 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:16.579 ================================================================================= 00:07:16.579 1.00000% : 6604.012us 00:07:16.579 10.00000% : 7612.258us 00:07:16.579 25.00000% : 8318.031us 00:07:16.579 50.00000% : 9729.575us 00:07:16.579 75.00000% : 12451.840us 00:07:16.579 90.00000% : 15224.517us 00:07:16.579 95.00000% : 16333.588us 00:07:16.579 98.00000% : 17946.782us 00:07:16.579 99.00000% : 20870.695us 00:07:16.579 99.50000% : 28029.243us 00:07:16.579 99.90000% : 29037.489us 00:07:16.579 99.99000% : 29239.138us 00:07:16.579 99.99900% : 29239.138us 00:07:16.579 99.99990% : 29239.138us 00:07:16.579 99.99999% : 29239.138us 00:07:16.579 00:07:16.579 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:16.579 ================================================================================= 00:07:16.579 1.00000% : 6654.425us 00:07:16.579 10.00000% : 7612.258us 00:07:16.579 25.00000% : 8318.031us 00:07:16.579 50.00000% : 9729.575us 00:07:16.579 75.00000% : 12502.252us 00:07:16.579 90.00000% : 15224.517us 00:07:16.579 95.00000% : 16333.588us 00:07:16.579 98.00000% : 18350.080us 
00:07:16.579 99.00000% : 20769.871us 00:07:16.579 99.50000% : 27625.945us 00:07:16.579 99.90000% : 28432.542us 00:07:16.579 99.99000% : 28634.191us 00:07:16.579 99.99900% : 28634.191us 00:07:16.579 99.99990% : 28634.191us 00:07:16.579 99.99999% : 28634.191us 00:07:16.579 00:07:16.579 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:16.579 ================================================================================= 00:07:16.579 1.00000% : 6704.837us 00:07:16.579 10.00000% : 7612.258us 00:07:16.579 25.00000% : 8318.031us 00:07:16.579 50.00000% : 9729.575us 00:07:16.579 75.00000% : 12603.077us 00:07:16.579 90.00000% : 15022.868us 00:07:16.579 95.00000% : 16434.412us 00:07:16.579 98.00000% : 18047.606us 00:07:16.579 99.00000% : 20568.222us 00:07:16.579 99.50000% : 27020.997us 00:07:16.579 99.90000% : 27827.594us 00:07:16.579 99.99000% : 28029.243us 00:07:16.579 99.99900% : 28029.243us 00:07:16.579 99.99990% : 28029.243us 00:07:16.579 99.99999% : 28029.243us 00:07:16.579 00:07:16.579 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:16.579 ================================================================================= 00:07:16.579 1.00000% : 6452.775us 00:07:16.579 10.00000% : 7612.258us 00:07:16.579 25.00000% : 8318.031us 00:07:16.579 50.00000% : 9679.163us 00:07:16.579 75.00000% : 12603.077us 00:07:16.579 90.00000% : 15224.517us 00:07:16.579 95.00000% : 16535.237us 00:07:16.579 98.00000% : 17946.782us 00:07:16.579 99.00000% : 20467.397us 00:07:16.579 99.50000% : 26416.049us 00:07:16.579 99.90000% : 27222.646us 00:07:16.579 99.99000% : 27424.295us 00:07:16.579 99.99900% : 27424.295us 00:07:16.579 99.99990% : 27424.295us 00:07:16.579 99.99999% : 27424.295us 00:07:16.579 00:07:16.579 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:16.579 ============================================================================== 00:07:16.579 Range in us Cumulative IO count 00:07:16.579 5948.652 - 5973.858: 0.0083% ( 1) 00:07:16.579 5973.858 - 5999.065: 0.0499% ( 5) 00:07:16.579 5999.065 - 6024.271: 0.0665% ( 2) 00:07:16.579 6024.271 - 6049.477: 0.0997% ( 4) 00:07:16.579 6049.477 - 6074.683: 0.1330% ( 4) 00:07:16.579 6074.683 - 6099.889: 0.1496% ( 2) 00:07:16.579 6099.889 - 6125.095: 0.1745% ( 3) 00:07:16.579 6125.095 - 6150.302: 0.2078% ( 4) 00:07:16.579 6150.302 - 6175.508: 0.2327% ( 3) 00:07:16.579 6175.508 - 6200.714: 0.2660% ( 4) 00:07:16.579 6200.714 - 6225.920: 0.2826% ( 2) 00:07:16.579 6225.920 - 6251.126: 0.3241% ( 5) 00:07:16.579 6251.126 - 6276.332: 0.3574% ( 4) 00:07:16.579 6276.332 - 6301.538: 0.4072% ( 6) 00:07:16.579 6301.538 - 6326.745: 0.4322% ( 3) 00:07:16.579 6326.745 - 6351.951: 0.4820% ( 6) 00:07:16.579 6351.951 - 6377.157: 0.5070% ( 3) 00:07:16.579 6377.157 - 6402.363: 0.5818% ( 9) 00:07:16.579 6402.363 - 6427.569: 0.6233% ( 5) 00:07:16.579 6427.569 - 6452.775: 0.6649% ( 5) 00:07:16.579 6452.775 - 6503.188: 0.8311% ( 20) 00:07:16.579 6503.188 - 6553.600: 0.9890% ( 19) 00:07:16.579 6553.600 - 6604.012: 1.2217% ( 28) 00:07:16.579 6604.012 - 6654.425: 1.4628% ( 29) 00:07:16.579 6654.425 - 6704.837: 1.7370% ( 33) 00:07:16.579 6704.837 - 6755.249: 2.0113% ( 33) 00:07:16.579 6755.249 - 6805.662: 2.3687% ( 43) 00:07:16.579 6805.662 - 6856.074: 2.6845% ( 38) 00:07:16.579 6856.074 - 6906.486: 3.0419% ( 43) 00:07:16.579 6906.486 - 6956.898: 3.3411% ( 36) 00:07:16.579 6956.898 - 7007.311: 3.6985% ( 43) 00:07:16.579 7007.311 - 7057.723: 4.0891% ( 47) 00:07:16.579 7057.723 - 7108.135: 4.4382% ( 42) 00:07:16.579 7108.135 - 7158.548: 
4.8039% ( 44) 00:07:16.579 7158.548 - 7208.960: 5.2111% ( 49) 00:07:16.579 7208.960 - 7259.372: 5.6682% ( 55) 00:07:16.579 7259.372 - 7309.785: 6.1669% ( 60) 00:07:16.579 7309.785 - 7360.197: 6.7487% ( 70) 00:07:16.579 7360.197 - 7410.609: 7.4385% ( 83) 00:07:16.579 7410.609 - 7461.022: 8.1782% ( 89) 00:07:16.579 7461.022 - 7511.434: 8.8680% ( 83) 00:07:16.579 7511.434 - 7561.846: 9.5828% ( 86) 00:07:16.579 7561.846 - 7612.258: 10.2560% ( 81) 00:07:16.579 7612.258 - 7662.671: 10.9126% ( 79) 00:07:16.579 7662.671 - 7713.083: 11.7271% ( 98) 00:07:16.579 7713.083 - 7763.495: 12.5914% ( 104) 00:07:16.579 7763.495 - 7813.908: 13.4890% ( 108) 00:07:16.579 7813.908 - 7864.320: 14.4199% ( 112) 00:07:16.579 7864.320 - 7914.732: 15.3840% ( 116) 00:07:16.579 7914.732 - 7965.145: 16.4478% ( 128) 00:07:16.579 7965.145 - 8015.557: 17.5615% ( 134) 00:07:16.579 8015.557 - 8065.969: 18.6752% ( 134) 00:07:16.579 8065.969 - 8116.382: 19.8969% ( 147) 00:07:16.579 8116.382 - 8166.794: 21.1686% ( 153) 00:07:16.579 8166.794 - 8217.206: 22.4318% ( 152) 00:07:16.579 8217.206 - 8267.618: 23.6203% ( 143) 00:07:16.579 8267.618 - 8318.031: 24.8005% ( 142) 00:07:16.579 8318.031 - 8368.443: 25.8810% ( 130) 00:07:16.579 8368.443 - 8418.855: 27.0695% ( 143) 00:07:16.579 8418.855 - 8469.268: 28.1333% ( 128) 00:07:16.579 8469.268 - 8519.680: 29.4631% ( 160) 00:07:16.579 8519.680 - 8570.092: 30.6267% ( 140) 00:07:16.579 8570.092 - 8620.505: 31.8650% ( 149) 00:07:16.579 8620.505 - 8670.917: 32.9787% ( 134) 00:07:16.579 8670.917 - 8721.329: 34.2919% ( 158) 00:07:16.579 8721.329 - 8771.742: 35.2892% ( 120) 00:07:16.579 8771.742 - 8822.154: 36.5941% ( 157) 00:07:16.579 8822.154 - 8872.566: 37.6247% ( 124) 00:07:16.579 8872.566 - 8922.978: 38.6719% ( 126) 00:07:16.579 8922.978 - 8973.391: 39.6941% ( 123) 00:07:16.579 8973.391 - 9023.803: 40.5918% ( 108) 00:07:16.579 9023.803 - 9074.215: 41.3730% ( 94) 00:07:16.579 9074.215 - 9124.628: 42.1459% ( 93) 00:07:16.579 9124.628 - 9175.040: 42.8856% ( 89) 00:07:16.579 9175.040 - 9225.452: 43.6420% ( 91) 00:07:16.579 9225.452 - 9275.865: 44.3983% ( 91) 00:07:16.579 9275.865 - 9326.277: 45.1712% ( 93) 00:07:16.579 9326.277 - 9376.689: 45.9691% ( 96) 00:07:16.579 9376.689 - 9427.102: 46.6922% ( 87) 00:07:16.579 9427.102 - 9477.514: 47.4235% ( 88) 00:07:16.579 9477.514 - 9527.926: 48.0386% ( 74) 00:07:16.579 9527.926 - 9578.338: 48.7284% ( 83) 00:07:16.579 9578.338 - 9628.751: 49.4515% ( 87) 00:07:16.579 9628.751 - 9679.163: 49.9834% ( 64) 00:07:16.579 9679.163 - 9729.575: 50.6815% ( 84) 00:07:16.579 9729.575 - 9779.988: 51.2633% ( 70) 00:07:16.579 9779.988 - 9830.400: 51.7952% ( 64) 00:07:16.579 9830.400 - 9880.812: 52.3022% ( 61) 00:07:16.579 9880.812 - 9931.225: 53.0419% ( 89) 00:07:16.579 9931.225 - 9981.637: 53.5738% ( 64) 00:07:16.579 9981.637 - 10032.049: 54.0226% ( 54) 00:07:16.579 10032.049 - 10082.462: 54.5379% ( 62) 00:07:16.579 10082.462 - 10132.874: 55.0283% ( 59) 00:07:16.579 10132.874 - 10183.286: 55.5685% ( 65) 00:07:16.579 10183.286 - 10233.698: 56.0173% ( 54) 00:07:16.579 10233.698 - 10284.111: 56.5243% ( 61) 00:07:16.579 10284.111 - 10334.523: 56.9149% ( 47) 00:07:16.579 10334.523 - 10384.935: 57.3055% ( 47) 00:07:16.579 10384.935 - 10435.348: 57.7294% ( 51) 00:07:16.579 10435.348 - 10485.760: 58.1782% ( 54) 00:07:16.579 10485.760 - 10536.172: 58.5688% ( 47) 00:07:16.579 10536.172 - 10586.585: 59.0758% ( 61) 00:07:16.579 10586.585 - 10636.997: 59.4747% ( 48) 00:07:16.579 10636.997 - 10687.409: 59.8238% ( 42) 00:07:16.579 10687.409 - 10737.822: 60.1313% ( 37) 00:07:16.579 
10737.822 - 10788.234: 60.5219% ( 47) 00:07:16.579 10788.234 - 10838.646: 60.8295% ( 37) 00:07:16.579 10838.646 - 10889.058: 61.1619% ( 40) 00:07:16.579 10889.058 - 10939.471: 61.5608% ( 48) 00:07:16.579 10939.471 - 10989.883: 61.9432% ( 46) 00:07:16.579 10989.883 - 11040.295: 62.3920% ( 54) 00:07:16.579 11040.295 - 11090.708: 62.8324% ( 53) 00:07:16.580 11090.708 - 11141.120: 63.2314% ( 48) 00:07:16.580 11141.120 - 11191.532: 63.6885% ( 55) 00:07:16.580 11191.532 - 11241.945: 64.0376% ( 42) 00:07:16.580 11241.945 - 11292.357: 64.4448% ( 49) 00:07:16.580 11292.357 - 11342.769: 64.9186% ( 57) 00:07:16.580 11342.769 - 11393.182: 65.4255% ( 61) 00:07:16.580 11393.182 - 11443.594: 65.8494% ( 51) 00:07:16.580 11443.594 - 11494.006: 66.3065% ( 55) 00:07:16.580 11494.006 - 11544.418: 66.7055% ( 48) 00:07:16.580 11544.418 - 11594.831: 67.1210% ( 50) 00:07:16.580 11594.831 - 11645.243: 67.5033% ( 46) 00:07:16.580 11645.243 - 11695.655: 67.9355% ( 52) 00:07:16.580 11695.655 - 11746.068: 68.4425% ( 61) 00:07:16.580 11746.068 - 11796.480: 68.9079% ( 56) 00:07:16.580 11796.480 - 11846.892: 69.3318% ( 51) 00:07:16.580 11846.892 - 11897.305: 69.7307% ( 48) 00:07:16.580 11897.305 - 11947.717: 70.1878% ( 55) 00:07:16.580 11947.717 - 11998.129: 70.6699% ( 58) 00:07:16.580 11998.129 - 12048.542: 71.0938% ( 51) 00:07:16.580 12048.542 - 12098.954: 71.4511% ( 43) 00:07:16.580 12098.954 - 12149.366: 71.9249% ( 57) 00:07:16.580 12149.366 - 12199.778: 72.2324% ( 37) 00:07:16.580 12199.778 - 12250.191: 72.6396% ( 49) 00:07:16.580 12250.191 - 12300.603: 73.0136% ( 45) 00:07:16.580 12300.603 - 12351.015: 73.4292% ( 50) 00:07:16.580 12351.015 - 12401.428: 73.8364% ( 49) 00:07:16.580 12401.428 - 12451.840: 74.1938% ( 43) 00:07:16.580 12451.840 - 12502.252: 74.4930% ( 36) 00:07:16.580 12502.252 - 12552.665: 74.8670% ( 45) 00:07:16.580 12552.665 - 12603.077: 75.2410% ( 45) 00:07:16.580 12603.077 - 12653.489: 75.6067% ( 44) 00:07:16.580 12653.489 - 12703.902: 76.0223% ( 50) 00:07:16.580 12703.902 - 12754.314: 76.3797% ( 43) 00:07:16.580 12754.314 - 12804.726: 76.7287% ( 42) 00:07:16.580 12804.726 - 12855.138: 77.0695% ( 41) 00:07:16.580 12855.138 - 12905.551: 77.3853% ( 38) 00:07:16.580 12905.551 - 13006.375: 78.0419% ( 79) 00:07:16.580 13006.375 - 13107.200: 78.8314% ( 95) 00:07:16.580 13107.200 - 13208.025: 79.5545% ( 87) 00:07:16.580 13208.025 - 13308.849: 80.2443% ( 83) 00:07:16.580 13308.849 - 13409.674: 80.9508% ( 85) 00:07:16.580 13409.674 - 13510.498: 81.6489% ( 84) 00:07:16.580 13510.498 - 13611.323: 82.3969% ( 90) 00:07:16.580 13611.323 - 13712.148: 83.1533% ( 91) 00:07:16.580 13712.148 - 13812.972: 83.9345% ( 94) 00:07:16.580 13812.972 - 13913.797: 84.6825% ( 90) 00:07:16.580 13913.797 - 14014.622: 85.3640% ( 82) 00:07:16.580 14014.622 - 14115.446: 85.8461% ( 58) 00:07:16.580 14115.446 - 14216.271: 86.3614% ( 62) 00:07:16.580 14216.271 - 14317.095: 86.9182% ( 67) 00:07:16.580 14317.095 - 14417.920: 87.4751% ( 67) 00:07:16.580 14417.920 - 14518.745: 88.0070% ( 64) 00:07:16.580 14518.745 - 14619.569: 88.4807% ( 57) 00:07:16.580 14619.569 - 14720.394: 88.8464% ( 44) 00:07:16.580 14720.394 - 14821.218: 89.3035% ( 55) 00:07:16.580 14821.218 - 14922.043: 89.6277% ( 39) 00:07:16.580 14922.043 - 15022.868: 89.9767% ( 42) 00:07:16.580 15022.868 - 15123.692: 90.2510% ( 33) 00:07:16.580 15123.692 - 15224.517: 90.5086% ( 31) 00:07:16.580 15224.517 - 15325.342: 90.7829% ( 33) 00:07:16.580 15325.342 - 15426.166: 91.0738% ( 35) 00:07:16.580 15426.166 - 15526.991: 91.3481% ( 33) 00:07:16.580 15526.991 - 15627.815: 91.6390% ( 
35) 00:07:16.580 15627.815 - 15728.640: 92.0878% ( 54) 00:07:16.580 15728.640 - 15829.465: 92.3703% ( 34) 00:07:16.580 15829.465 - 15930.289: 92.6779% ( 37) 00:07:16.580 15930.289 - 16031.114: 93.0186% ( 41) 00:07:16.580 16031.114 - 16131.938: 93.4425% ( 51) 00:07:16.580 16131.938 - 16232.763: 93.7334% ( 35) 00:07:16.580 16232.763 - 16333.588: 94.0492% ( 38) 00:07:16.580 16333.588 - 16434.412: 94.3567% ( 37) 00:07:16.580 16434.412 - 16535.237: 94.7224% ( 44) 00:07:16.580 16535.237 - 16636.062: 95.0382% ( 38) 00:07:16.580 16636.062 - 16736.886: 95.2543% ( 26) 00:07:16.580 16736.886 - 16837.711: 95.4870% ( 28) 00:07:16.580 16837.711 - 16938.535: 95.7613% ( 33) 00:07:16.580 16938.535 - 17039.360: 96.0023% ( 29) 00:07:16.580 17039.360 - 17140.185: 96.2018% ( 24) 00:07:16.580 17140.185 - 17241.009: 96.4262% ( 27) 00:07:16.580 17241.009 - 17341.834: 96.6423% ( 26) 00:07:16.580 17341.834 - 17442.658: 96.8418% ( 24) 00:07:16.580 17442.658 - 17543.483: 97.0911% ( 30) 00:07:16.580 17543.483 - 17644.308: 97.4235% ( 40) 00:07:16.580 17644.308 - 17745.132: 97.6479% ( 27) 00:07:16.580 17745.132 - 17845.957: 97.7975% ( 18) 00:07:16.580 17845.957 - 17946.782: 97.9638% ( 20) 00:07:16.580 17946.782 - 18047.606: 98.1549% ( 23) 00:07:16.580 18047.606 - 18148.431: 98.3295% ( 21) 00:07:16.580 18148.431 - 18249.255: 98.5206% ( 23) 00:07:16.580 18249.255 - 18350.080: 98.6536% ( 16) 00:07:16.580 18350.080 - 18450.905: 98.7367% ( 10) 00:07:16.580 18450.905 - 18551.729: 98.8281% ( 11) 00:07:16.580 18551.729 - 18652.554: 98.8863% ( 7) 00:07:16.580 18652.554 - 18753.378: 98.9195% ( 4) 00:07:16.580 18753.378 - 18854.203: 98.9362% ( 2) 00:07:16.580 20568.222 - 20669.046: 98.9694% ( 4) 00:07:16.580 20669.046 - 20769.871: 98.9943% ( 3) 00:07:16.580 20769.871 - 20870.695: 99.0359% ( 5) 00:07:16.580 20870.695 - 20971.520: 99.0525% ( 2) 00:07:16.580 20971.520 - 21072.345: 99.0775% ( 3) 00:07:16.580 21072.345 - 21173.169: 99.1190% ( 5) 00:07:16.580 21173.169 - 21273.994: 99.1523% ( 4) 00:07:16.580 21273.994 - 21374.818: 99.1855% ( 4) 00:07:16.580 21374.818 - 21475.643: 99.2104% ( 3) 00:07:16.580 21475.643 - 21576.468: 99.2437% ( 4) 00:07:16.580 21576.468 - 21677.292: 99.2686% ( 3) 00:07:16.580 21677.292 - 21778.117: 99.3185% ( 6) 00:07:16.580 21778.117 - 21878.942: 99.3434% ( 3) 00:07:16.580 21878.942 - 21979.766: 99.3767% ( 4) 00:07:16.580 21979.766 - 22080.591: 99.4016% ( 3) 00:07:16.580 22080.591 - 22181.415: 99.4265% ( 3) 00:07:16.580 22282.240 - 22383.065: 99.4348% ( 1) 00:07:16.580 22383.065 - 22483.889: 99.4681% ( 4) 00:07:16.580 27424.295 - 27625.945: 99.5761% ( 13) 00:07:16.580 27625.945 - 27827.594: 99.6426% ( 8) 00:07:16.580 27827.594 - 28029.243: 99.6759% ( 4) 00:07:16.580 28029.243 - 28230.892: 99.7424% ( 8) 00:07:16.580 28230.892 - 28432.542: 99.8172% ( 9) 00:07:16.580 28432.542 - 28634.191: 99.8753% ( 7) 00:07:16.580 28634.191 - 28835.840: 99.9418% ( 8) 00:07:16.580 28835.840 - 29037.489: 99.9917% ( 6) 00:07:16.580 29037.489 - 29239.138: 100.0000% ( 1) 00:07:16.580 00:07:16.580 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:16.580 ============================================================================== 00:07:16.580 Range in us Cumulative IO count 00:07:16.580 5973.858 - 5999.065: 0.0166% ( 2) 00:07:16.580 5999.065 - 6024.271: 0.0332% ( 2) 00:07:16.580 6024.271 - 6049.477: 0.0499% ( 2) 00:07:16.580 6049.477 - 6074.683: 0.0665% ( 2) 00:07:16.580 6074.683 - 6099.889: 0.0997% ( 4) 00:07:16.580 6099.889 - 6125.095: 0.1330% ( 4) 00:07:16.580 6125.095 - 6150.302: 0.1745% ( 5) 00:07:16.580 
6150.302 - 6175.508: 0.1995% ( 3) 00:07:16.580 6175.508 - 6200.714: 0.2244% ( 3) 00:07:16.580 6200.714 - 6225.920: 0.2576% ( 4) 00:07:16.580 6225.920 - 6251.126: 0.2909% ( 4) 00:07:16.580 6251.126 - 6276.332: 0.3241% ( 4) 00:07:16.580 6276.332 - 6301.538: 0.3574% ( 4) 00:07:16.580 6301.538 - 6326.745: 0.3906% ( 4) 00:07:16.580 6326.745 - 6351.951: 0.4488% ( 7) 00:07:16.580 6351.951 - 6377.157: 0.5236% ( 9) 00:07:16.580 6377.157 - 6402.363: 0.5652% ( 5) 00:07:16.580 6402.363 - 6427.569: 0.5984% ( 4) 00:07:16.580 6427.569 - 6452.775: 0.6400% ( 5) 00:07:16.580 6452.775 - 6503.188: 0.7480% ( 13) 00:07:16.580 6503.188 - 6553.600: 0.8644% ( 14) 00:07:16.580 6553.600 - 6604.012: 1.0888% ( 27) 00:07:16.580 6604.012 - 6654.425: 1.3880% ( 36) 00:07:16.580 6654.425 - 6704.837: 1.6373% ( 30) 00:07:16.580 6704.837 - 6755.249: 2.0196% ( 46) 00:07:16.580 6755.249 - 6805.662: 2.3936% ( 45) 00:07:16.580 6805.662 - 6856.074: 2.7593% ( 44) 00:07:16.580 6856.074 - 6906.486: 3.1998% ( 53) 00:07:16.580 6906.486 - 6956.898: 3.6154% ( 50) 00:07:16.580 6956.898 - 7007.311: 4.0226% ( 49) 00:07:16.580 7007.311 - 7057.723: 4.4132% ( 47) 00:07:16.580 7057.723 - 7108.135: 4.7955% ( 46) 00:07:16.580 7108.135 - 7158.548: 5.2028% ( 49) 00:07:16.581 7158.548 - 7208.960: 5.6017% ( 48) 00:07:16.581 7208.960 - 7259.372: 6.1336% ( 64) 00:07:16.581 7259.372 - 7309.785: 6.6822% ( 66) 00:07:16.581 7309.785 - 7360.197: 7.3305% ( 78) 00:07:16.581 7360.197 - 7410.609: 8.0203% ( 83) 00:07:16.581 7410.609 - 7461.022: 8.5771% ( 67) 00:07:16.581 7461.022 - 7511.434: 9.1174% ( 65) 00:07:16.581 7511.434 - 7561.846: 9.7573% ( 77) 00:07:16.581 7561.846 - 7612.258: 10.3225% ( 68) 00:07:16.581 7612.258 - 7662.671: 10.9126% ( 71) 00:07:16.581 7662.671 - 7713.083: 11.5525% ( 77) 00:07:16.581 7713.083 - 7763.495: 12.1592% ( 73) 00:07:16.581 7763.495 - 7813.908: 12.9072% ( 90) 00:07:16.581 7813.908 - 7864.320: 13.7633% ( 103) 00:07:16.581 7864.320 - 7914.732: 14.6692% ( 109) 00:07:16.581 7914.732 - 7965.145: 15.7164% ( 126) 00:07:16.581 7965.145 - 8015.557: 16.8717% ( 139) 00:07:16.581 8015.557 - 8065.969: 18.0269% ( 139) 00:07:16.581 8065.969 - 8116.382: 19.2570% ( 148) 00:07:16.581 8116.382 - 8166.794: 20.4621% ( 145) 00:07:16.581 8166.794 - 8217.206: 21.7503% ( 155) 00:07:16.581 8217.206 - 8267.618: 23.1217% ( 165) 00:07:16.581 8267.618 - 8318.031: 24.4432% ( 159) 00:07:16.581 8318.031 - 8368.443: 25.7064% ( 152) 00:07:16.581 8368.443 - 8418.855: 27.0445% ( 161) 00:07:16.581 8418.855 - 8469.268: 28.4408% ( 168) 00:07:16.581 8469.268 - 8519.680: 29.8703% ( 172) 00:07:16.581 8519.680 - 8570.092: 31.2417% ( 165) 00:07:16.581 8570.092 - 8620.505: 32.5798% ( 161) 00:07:16.581 8620.505 - 8670.917: 33.9345% ( 163) 00:07:16.581 8670.917 - 8721.329: 35.1562% ( 147) 00:07:16.581 8721.329 - 8771.742: 36.3281% ( 141) 00:07:16.581 8771.742 - 8822.154: 37.4501% ( 135) 00:07:16.581 8822.154 - 8872.566: 38.4059% ( 115) 00:07:16.581 8872.566 - 8922.978: 39.3451% ( 113) 00:07:16.581 8922.978 - 8973.391: 40.3092% ( 116) 00:07:16.581 8973.391 - 9023.803: 41.2566% ( 114) 00:07:16.581 9023.803 - 9074.215: 42.1210% ( 104) 00:07:16.581 9074.215 - 9124.628: 42.8939% ( 93) 00:07:16.581 9124.628 - 9175.040: 43.7001% ( 97) 00:07:16.581 9175.040 - 9225.452: 44.4398% ( 89) 00:07:16.581 9225.452 - 9275.865: 45.1961% ( 91) 00:07:16.581 9275.865 - 9326.277: 45.8361% ( 77) 00:07:16.581 9326.277 - 9376.689: 46.4594% ( 75) 00:07:16.581 9376.689 - 9427.102: 47.0578% ( 72) 00:07:16.581 9427.102 - 9477.514: 47.5565% ( 60) 00:07:16.581 9477.514 - 9527.926: 48.0635% ( 61) 
00:07:16.581 9527.926 - 9578.338: 48.5705% ( 61) 00:07:16.581 9578.338 - 9628.751: 49.0858% ( 62) 00:07:16.581 9628.751 - 9679.163: 49.6676% ( 70) 00:07:16.581 9679.163 - 9729.575: 50.3324% ( 80) 00:07:16.581 9729.575 - 9779.988: 51.0057% ( 81) 00:07:16.581 9779.988 - 9830.400: 51.5209% ( 62) 00:07:16.581 9830.400 - 9880.812: 52.0279% ( 61) 00:07:16.581 9880.812 - 9931.225: 52.5349% ( 61) 00:07:16.581 9931.225 - 9981.637: 53.0336% ( 60) 00:07:16.581 9981.637 - 10032.049: 53.4990% ( 56) 00:07:16.581 10032.049 - 10082.462: 54.0808% ( 70) 00:07:16.581 10082.462 - 10132.874: 54.5961% ( 62) 00:07:16.581 10132.874 - 10183.286: 55.0698% ( 57) 00:07:16.581 10183.286 - 10233.698: 55.5934% ( 63) 00:07:16.581 10233.698 - 10284.111: 56.2417% ( 78) 00:07:16.581 10284.111 - 10334.523: 56.8816% ( 77) 00:07:16.581 10334.523 - 10384.935: 57.4717% ( 71) 00:07:16.581 10384.935 - 10435.348: 58.0286% ( 67) 00:07:16.581 10435.348 - 10485.760: 58.5688% ( 65) 00:07:16.581 10485.760 - 10536.172: 59.0675% ( 60) 00:07:16.581 10536.172 - 10586.585: 59.4498% ( 46) 00:07:16.581 10586.585 - 10636.997: 59.8820% ( 52) 00:07:16.581 10636.997 - 10687.409: 60.2560% ( 45) 00:07:16.581 10687.409 - 10737.822: 60.5967% ( 41) 00:07:16.581 10737.822 - 10788.234: 60.9458% ( 42) 00:07:16.581 10788.234 - 10838.646: 61.2284% ( 34) 00:07:16.581 10838.646 - 10889.058: 61.5775% ( 42) 00:07:16.581 10889.058 - 10939.471: 61.9681% ( 47) 00:07:16.581 10939.471 - 10989.883: 62.3587% ( 47) 00:07:16.581 10989.883 - 11040.295: 62.7161% ( 43) 00:07:16.581 11040.295 - 11090.708: 63.0818% ( 44) 00:07:16.581 11090.708 - 11141.120: 63.4807% ( 48) 00:07:16.581 11141.120 - 11191.532: 63.8547% ( 45) 00:07:16.581 11191.532 - 11241.945: 64.2038% ( 42) 00:07:16.581 11241.945 - 11292.357: 64.5445% ( 41) 00:07:16.581 11292.357 - 11342.769: 64.9352% ( 47) 00:07:16.581 11342.769 - 11393.182: 65.2261% ( 35) 00:07:16.581 11393.182 - 11443.594: 65.6167% ( 47) 00:07:16.581 11443.594 - 11494.006: 66.0489% ( 52) 00:07:16.581 11494.006 - 11544.418: 66.4478% ( 48) 00:07:16.581 11544.418 - 11594.831: 66.8966% ( 54) 00:07:16.581 11594.831 - 11645.243: 67.3122% ( 50) 00:07:16.581 11645.243 - 11695.655: 67.7610% ( 54) 00:07:16.581 11695.655 - 11746.068: 68.2680% ( 61) 00:07:16.581 11746.068 - 11796.480: 68.8165% ( 66) 00:07:16.581 11796.480 - 11846.892: 69.3235% ( 61) 00:07:16.581 11846.892 - 11897.305: 69.9136% ( 71) 00:07:16.581 11897.305 - 11947.717: 70.4205% ( 61) 00:07:16.581 11947.717 - 11998.129: 70.9026% ( 58) 00:07:16.581 11998.129 - 12048.542: 71.3348% ( 52) 00:07:16.581 12048.542 - 12098.954: 71.7337% ( 48) 00:07:16.581 12098.954 - 12149.366: 72.1576% ( 51) 00:07:16.581 12149.366 - 12199.778: 72.6313% ( 57) 00:07:16.581 12199.778 - 12250.191: 73.0386% ( 49) 00:07:16.581 12250.191 - 12300.603: 73.4541% ( 50) 00:07:16.581 12300.603 - 12351.015: 73.8614% ( 49) 00:07:16.581 12351.015 - 12401.428: 74.2271% ( 44) 00:07:16.581 12401.428 - 12451.840: 74.5512% ( 39) 00:07:16.581 12451.840 - 12502.252: 74.9003% ( 42) 00:07:16.581 12502.252 - 12552.665: 75.2743% ( 45) 00:07:16.581 12552.665 - 12603.077: 75.6400% ( 44) 00:07:16.581 12603.077 - 12653.489: 75.9807% ( 41) 00:07:16.581 12653.489 - 12703.902: 76.3049% ( 39) 00:07:16.581 12703.902 - 12754.314: 76.6456% ( 41) 00:07:16.581 12754.314 - 12804.726: 76.9781% ( 40) 00:07:16.581 12804.726 - 12855.138: 77.4186% ( 53) 00:07:16.581 12855.138 - 12905.551: 77.7427% ( 39) 00:07:16.581 12905.551 - 13006.375: 78.4408% ( 84) 00:07:16.581 13006.375 - 13107.200: 78.9727% ( 64) 00:07:16.581 13107.200 - 13208.025: 79.5711% ( 72) 
00:07:16.581 13208.025 - 13308.849: 80.2277% ( 79) 00:07:16.581 13308.849 - 13409.674: 80.9176% ( 83) 00:07:16.581 13409.674 - 13510.498: 81.6157% ( 84) 00:07:16.581 13510.498 - 13611.323: 82.2224% ( 73) 00:07:16.581 13611.323 - 13712.148: 82.7045% ( 58) 00:07:16.581 13712.148 - 13812.972: 83.2281% ( 63) 00:07:16.581 13812.972 - 13913.797: 83.7849% ( 67) 00:07:16.581 13913.797 - 14014.622: 84.2836% ( 60) 00:07:16.581 14014.622 - 14115.446: 84.7573% ( 57) 00:07:16.581 14115.446 - 14216.271: 85.2227% ( 56) 00:07:16.581 14216.271 - 14317.095: 85.7463% ( 63) 00:07:16.581 14317.095 - 14417.920: 86.3032% ( 67) 00:07:16.581 14417.920 - 14518.745: 86.9016% ( 72) 00:07:16.581 14518.745 - 14619.569: 87.5499% ( 78) 00:07:16.581 14619.569 - 14720.394: 88.1649% ( 74) 00:07:16.581 14720.394 - 14821.218: 88.7134% ( 66) 00:07:16.581 14821.218 - 14922.043: 89.1789% ( 56) 00:07:16.581 14922.043 - 15022.868: 89.6692% ( 59) 00:07:16.581 15022.868 - 15123.692: 90.1845% ( 62) 00:07:16.581 15123.692 - 15224.517: 90.6666% ( 58) 00:07:16.581 15224.517 - 15325.342: 91.0821% ( 50) 00:07:16.581 15325.342 - 15426.166: 91.5392% ( 55) 00:07:16.581 15426.166 - 15526.991: 91.9132% ( 45) 00:07:16.581 15526.991 - 15627.815: 92.2789% ( 44) 00:07:16.581 15627.815 - 15728.640: 92.6612% ( 46) 00:07:16.581 15728.640 - 15829.465: 93.0352% ( 45) 00:07:16.581 15829.465 - 15930.289: 93.3095% ( 33) 00:07:16.581 15930.289 - 16031.114: 93.5672% ( 31) 00:07:16.581 16031.114 - 16131.938: 93.7916% ( 27) 00:07:16.581 16131.938 - 16232.763: 94.0243% ( 28) 00:07:16.581 16232.763 - 16333.588: 94.2570% ( 28) 00:07:16.581 16333.588 - 16434.412: 94.5645% ( 37) 00:07:16.581 16434.412 - 16535.237: 94.8637% ( 36) 00:07:16.581 16535.237 - 16636.062: 95.2045% ( 41) 00:07:16.581 16636.062 - 16736.886: 95.4704% ( 32) 00:07:16.581 16736.886 - 16837.711: 95.7197% ( 30) 00:07:16.581 16837.711 - 16938.535: 95.9857% ( 32) 00:07:16.581 16938.535 - 17039.360: 96.2517% ( 32) 00:07:16.581 17039.360 - 17140.185: 96.5259% ( 33) 00:07:16.581 17140.185 - 17241.009: 96.7753% ( 30) 00:07:16.581 17241.009 - 17341.834: 96.9747% ( 24) 00:07:16.581 17341.834 - 17442.658: 97.1576% ( 22) 00:07:16.581 17442.658 - 17543.483: 97.3155% ( 19) 00:07:16.581 17543.483 - 17644.308: 97.4568% ( 17) 00:07:16.581 17644.308 - 17745.132: 97.5731% ( 14) 00:07:16.581 17745.132 - 17845.957: 97.6812% ( 13) 00:07:16.581 17845.957 - 17946.782: 97.7975% ( 14) 00:07:16.581 17946.782 - 18047.606: 97.9555% ( 19) 00:07:16.581 18047.606 - 18148.431: 98.1217% ( 20) 00:07:16.581 18148.431 - 18249.255: 98.2547% ( 16) 00:07:16.581 18249.255 - 18350.080: 98.3544% ( 12) 00:07:16.581 18350.080 - 18450.905: 98.4292% ( 9) 00:07:16.581 18450.905 - 18551.729: 98.5206% ( 11) 00:07:16.581 18551.729 - 18652.554: 98.6120% ( 11) 00:07:16.581 18652.554 - 18753.378: 98.6868% ( 9) 00:07:16.581 18753.378 - 18854.203: 98.7783% ( 11) 00:07:16.581 18854.203 - 18955.028: 98.8447% ( 8) 00:07:16.581 18955.028 - 19055.852: 98.8780% ( 4) 00:07:16.581 19055.852 - 19156.677: 98.9112% ( 4) 00:07:16.581 19156.677 - 19257.502: 98.9362% ( 3) 00:07:16.581 20669.046 - 20769.871: 98.9694% ( 4) 00:07:16.581 20769.871 - 20870.695: 99.0110% ( 5) 00:07:16.581 20870.695 - 20971.520: 99.0442% ( 4) 00:07:16.581 20971.520 - 21072.345: 99.0775% ( 4) 00:07:16.581 21072.345 - 21173.169: 99.1190% ( 5) 00:07:16.581 21173.169 - 21273.994: 99.1523% ( 4) 00:07:16.581 21273.994 - 21374.818: 99.1938% ( 5) 00:07:16.581 21374.818 - 21475.643: 99.2271% ( 4) 00:07:16.581 21475.643 - 21576.468: 99.2603% ( 4) 00:07:16.581 21576.468 - 21677.292: 99.3019% ( 5) 
00:07:16.582 [tail of the preceding latency histogram condensed: cumulative IO holds at 99.4681% through the 22080.591-22181.415us bucket, and tail buckets from 27222.646us to 28432.542us bring it to 100.0000%]
00:07:16.582
00:07:16.582 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:16.582 ==============================================================================
00:07:16.582 Range in us Cumulative IO count
[histogram buckets condensed: from 5973.858-5999.065us (0.0083%, 1 IO) through 29037.489-29239.138us (100.0000%); cumulative IO crosses 50% by 9729.575us and holds at 99.4681% from 22181.415us until tail buckets from 27625.945us]
00:07:16.583
00:07:16.583 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:16.583 ==============================================================================
00:07:16.583 Range in us Cumulative IO count
[histogram buckets condensed: from 6049.477-6074.683us (0.0083%, 1 IO) through 28432.542-28634.191us (100.0000%); cumulative IO crosses 50% by 9729.575us and holds at 99.4681% from 22080.591us until tail buckets from 27424.295us]
00:07:16.584
00:07:16.584 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:16.584 ==============================================================================
00:07:16.584 Range in us Cumulative IO count
[histogram buckets condensed: from 6150.302-6175.508us (0.0166%, 2 IO) through 27827.594-28029.243us (100.0000%); cumulative IO crosses 50% by 9729.575us and holds at 99.4681% from 21878.942us until tail buckets from 26819.348us]
00:07:16.586
00:07:16.586 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:16.586 ==============================================================================
00:07:16.586 Range in us Cumulative IO count
[histogram buckets condensed: from 5142.055-5167.262us (0.0166%, 2 IO) through 27222.646-27424.295us (100.0000%); cumulative IO crosses 50% by 9679.163us and holds at 99.4681% from 21778.117us until tail buckets from 26214.400us]
00:07:16.587
00:07:16.587 02:54:32 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:17.561 Initializing NVMe Controllers
00:07:17.561 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:17.561 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:17.561 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:17.561 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:17.561 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:17.561 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:17.561 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:17.561 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:17.561 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:17.561 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:17.561 Initialization complete. Launching workers.
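The spdk_nvme_perf invocation above runs a one-second write workload (-w write, -t 1) of 12288-byte I/Os (-o 12288) at queue depth 128 (-q 128) against every attached namespace; per the tool's usage text, the doubled -L requests the detailed per-bucket latency tracking printed in these histograms, and -i selects a shared-memory instance id. A minimal sketch, assuming only the bucket format visible above ("lo - hi: cumulative% ( count )"), of how such cumulative buckets map back to percentile estimates; parse_buckets and percentile are hypothetical helper names, not part of SPDK:

    import re

    # Assumed bucket line shape, taken from the log above:
    #   "9679.163 - 9729.575: 50.1745% ( 54)"
    BUCKET_RE = re.compile(
        r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

    def parse_buckets(lines):
        # Yield (lo_us, hi_us, cumulative_percent) for every bucket found.
        for line in lines:
            for lo, hi, cum, _count in BUCKET_RE.findall(line):
                yield float(lo), float(hi), float(cum)

    def percentile(buckets, p):
        # Upper edge of the first bucket whose cumulative share reaches p.
        for _lo, hi, cum in buckets:
            if cum >= p:
                return hi
        return None

    sample = [
        "5973.858 - 5999.065: 0.0083% ( 1)",
        "9679.163 - 9729.575: 50.1745% ( 54)",
        "29037.489 - 29239.138: 100.0000% ( 6)",
    ]
    print(percentile(parse_buckets(sample), 50.0))  # -> 9729.575

Because the histogram stores cumulative shares, a percentile estimate is simply the upper edge of the first bucket that reaches the target percentage, which is why the summary tables below report bucket boundaries such as 10687.409us for the 50th percentile.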
00:07:17.561 ========================================================
00:07:17.561 Latency(us)
00:07:17.561 Device Information : IOPS MiB/s Average min max
00:07:17.561 PCIE (0000:00:10.0) NSID 1 from core 0: 11608.54 136.04 11031.36 5851.13 32226.25
00:07:17.561 PCIE (0000:00:11.0) NSID 1 from core 0: 11608.54 136.04 11023.45 5811.89 31904.45
00:07:17.561 PCIE (0000:00:13.0) NSID 1 from core 0: 11608.54 136.04 11014.52 4726.99 33469.17
00:07:17.561 PCIE (0000:00:12.0) NSID 1 from core 0: 11608.54 136.04 11004.89 4436.36 33394.05
00:07:17.561 PCIE (0000:00:12.0) NSID 2 from core 0: 11608.54 136.04 10995.56 4121.31 33219.83
00:07:17.561 PCIE (0000:00:12.0) NSID 3 from core 0: 11672.33 136.79 10926.23 3810.86 25335.18
00:07:17.561 ========================================================
00:07:17.561 Total : 69715.04 816.97 10999.27 3810.86 33469.17
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6452.775; 10% 8166.794; 25% 9427.102; 50% 10687.409; 75% 12149.366; 90% 14317.095; 95% 15526.991; 98% 17039.360; 99% 22887.188; 99.5% 30650.683; 99.9% 32062.228; 99.99% and above 32263.877]
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6427.569; 10% 8267.618; 25% 9427.102; 50% 10687.409; 75% 12250.191; 90% 14317.095; 95% 15627.815; 98% 16938.535; 99% 22685.538; 99.5% 30650.683; 99.9% 31658.929; 99.99% and above 32062.228]
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6377.157; 10% 7914.732; 25% 9376.689; 50% 10737.822; 75% 12149.366; 90% 14014.622; 95% 15930.289; 98% 17039.360; 99% 23794.609; 99.5% 31860.578; 99.9% 33272.123; 99.99% and above 33473.772]
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6402.363; 10% 7965.145; 25% 9376.689; 50% 10838.646; 75% 12149.366; 90% 13913.797; 95% 15930.289; 98% 17039.360; 99% 23391.311; 99.5% 32062.228; 99.9% 33272.123; 99.99% and above 33473.772]
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6427.569; 10% 8065.969; 25% 9376.689; 50% 10737.822; 75% 12149.366; 90% 14014.622; 95% 15627.815; 98% 17039.360; 99% 23693.785; 99.5% 31860.578; 99.9% 33070.474; 99.99% and above 33272.123]
00:07:17.561
00:07:17.561 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:17.561 =================================================================================
[percentiles, us: 1% 6427.569; 10% 8217.206; 25% 9376.689; 50% 10687.409; 75% 12098.954; 90% 14317.095; 95% 15627.815; 98% 16837.711; 99% 17543.483; 99.5% 24097.083; 99.9% 25105.329; 99.99% 25306.978; 99.999% and above 25407.803]
00:07:17.561
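The MiB/s column in the table above follows directly from IOPS and the 12288-byte I/O size of this run: MiB/s = IOPS x io_size_bytes / 2^20. A quick cross-check (a sketch; mib_per_s is a hypothetical helper, and the IOPS values are copied from the table):

    # Assumed relation: MiB/s = IOPS * io_size_bytes / 2**20, with the
    # 12288-byte I/O size coming from the "-o 12288" flag of the run above.
    IO_SIZE_BYTES = 12288

    def mib_per_s(iops):
        return iops * IO_SIZE_BYTES / 2**20

    print(round(mib_per_s(11608.54), 2))  # 136.04 -> matches the per-device rows
    print(round(mib_per_s(69715.04), 2))  # 816.97 -> matches the Total row

The Total row is the aggregate over all six namespaces, so the same relation holds for the summed IOPS.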
00:07:17.561 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:17.561 ==============================================================================
00:07:17.561 Range in us Cumulative IO count
[histogram buckets condensed: from 5847.828-5873.034us (0.0086%, 1 IO) through 32062.228-32263.877us (100.0000%); cumulative IO crosses 50% by 10687.409us and holds at 99.4505% from 23895.434us until tail buckets from 30449.034us]
00:07:17.563
00:07:17.563 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:17.563 ==============================================================================
00:07:17.563 Range in us Cumulative IO count
[histogram buckets condensed: they begin at 5797.415-5822.622us (0.0258%, 3 IO); the excerpt ends mid-histogram at the 6427.569-6452.775us bucket (1.2277%)]
00:07:17.563 6452.775 - 6503.188: 1.7170% ( 57) 00:07:17.563 6503.188 - 6553.600: 1.9145% ( 23) 00:07:17.563 6553.600 - 6604.012: 2.0776% ( 19) 00:07:17.563 6604.012 - 6654.425: 2.2579% ( 21) 00:07:17.563 6654.425 - 6704.837: 2.4983% ( 28) 00:07:17.563 6704.837 - 6755.249: 2.8503% ( 41) 00:07:17.563 6755.249 - 6805.662: 3.0649% ( 25) 00:07:17.563 6805.662 - 6856.074: 3.1765% ( 13) 00:07:17.563 6856.074 - 6906.486: 3.3396% ( 19) 00:07:17.563 6906.486 - 6956.898: 3.5199% ( 21) 00:07:17.563 6956.898 - 7007.311: 4.2325% ( 83) 00:07:17.563 7007.311 - 7057.723: 4.8935% ( 77) 00:07:17.563 7057.723 - 7108.135: 5.2284% ( 39) 00:07:17.563 7108.135 - 7158.548: 5.8808% ( 76) 00:07:17.563 7158.548 - 7208.960: 6.8595% ( 114) 00:07:17.563 7208.960 - 7259.372: 7.2030% ( 40) 00:07:17.563 7259.372 - 7309.785: 7.6837% ( 56) 00:07:17.563 7309.785 - 7360.197: 7.9928% ( 36) 00:07:17.563 7360.197 - 7410.609: 8.1044% ( 13) 00:07:17.563 7410.609 - 7461.022: 8.1645% ( 7) 00:07:17.563 7461.022 - 7511.434: 8.2332% ( 8) 00:07:17.563 7511.434 - 7561.846: 8.2933% ( 7) 00:07:17.563 7561.846 - 7612.258: 8.3448% ( 6) 00:07:17.563 7612.258 - 7662.671: 8.3877% ( 5) 00:07:17.563 7662.671 - 7713.083: 8.4478% ( 7) 00:07:17.563 7713.083 - 7763.495: 8.6109% ( 19) 00:07:17.563 7763.495 - 7813.908: 8.6453% ( 4) 00:07:17.563 7813.908 - 7864.320: 8.6796% ( 4) 00:07:17.563 7864.320 - 7914.732: 8.7225% ( 5) 00:07:17.563 7914.732 - 7965.145: 8.7912% ( 8) 00:07:17.563 7965.145 - 8015.557: 8.9114% ( 14) 00:07:17.563 8015.557 - 8065.969: 9.0144% ( 12) 00:07:17.563 8065.969 - 8116.382: 9.2634% ( 29) 00:07:17.563 8116.382 - 8166.794: 9.4437% ( 21) 00:07:17.563 8166.794 - 8217.206: 9.6240% ( 21) 00:07:17.563 8217.206 - 8267.618: 10.1219% ( 58) 00:07:17.563 8267.618 - 8318.031: 10.5598% ( 51) 00:07:17.563 8318.031 - 8368.443: 11.0148% ( 53) 00:07:17.563 8368.443 - 8418.855: 11.4955% ( 56) 00:07:17.563 8418.855 - 8469.268: 12.0192% ( 61) 00:07:17.563 8469.268 - 8519.680: 12.7146% ( 81) 00:07:17.563 8519.680 - 8570.092: 13.4186% ( 82) 00:07:17.563 8570.092 - 8620.505: 14.0367% ( 72) 00:07:17.563 8620.505 - 8670.917: 14.7150% ( 79) 00:07:17.563 8670.917 - 8721.329: 15.2816% ( 66) 00:07:17.563 8721.329 - 8771.742: 15.9512% ( 78) 00:07:17.563 8771.742 - 8822.154: 16.5865% ( 74) 00:07:17.563 8822.154 - 8872.566: 17.1789% ( 69) 00:07:17.563 8872.566 - 8922.978: 17.9602% ( 91) 00:07:17.563 8922.978 - 8973.391: 19.0247% ( 124) 00:07:17.563 8973.391 - 9023.803: 20.0807% ( 123) 00:07:17.563 9023.803 - 9074.215: 20.8104% ( 85) 00:07:17.563 9074.215 - 9124.628: 21.6690% ( 100) 00:07:17.563 9124.628 - 9175.040: 22.3300% ( 77) 00:07:17.563 9175.040 - 9225.452: 22.9310% ( 70) 00:07:17.563 9225.452 - 9275.865: 23.6607% ( 85) 00:07:17.563 9275.865 - 9326.277: 24.2188% ( 65) 00:07:17.563 9326.277 - 9376.689: 24.6995% ( 56) 00:07:17.563 9376.689 - 9427.102: 25.3692% ( 78) 00:07:17.563 9427.102 - 9477.514: 26.1247% ( 88) 00:07:17.563 9477.514 - 9527.926: 26.9059% ( 91) 00:07:17.563 9527.926 - 9578.338: 27.9876% ( 126) 00:07:17.563 9578.338 - 9628.751: 28.7517% ( 89) 00:07:17.563 9628.751 - 9679.163: 29.4042% ( 76) 00:07:17.563 9679.163 - 9729.575: 30.2370% ( 97) 00:07:17.563 9729.575 - 9779.988: 31.0010% ( 89) 00:07:17.563 9779.988 - 9830.400: 31.8424% ( 98) 00:07:17.563 9830.400 - 9880.812: 32.7696% ( 108) 00:07:17.563 9880.812 - 9931.225: 33.5766% ( 94) 00:07:17.563 9931.225 - 9981.637: 34.6068% ( 120) 00:07:17.563 9981.637 - 10032.049: 35.6198% ( 118) 00:07:17.563 10032.049 - 10082.462: 36.9420% ( 154) 00:07:17.563 10082.462 - 10132.874: 38.4100% ( 171) 
00:07:17.563 10132.874 - 10183.286: 39.7150% ( 152) 00:07:17.563 10183.286 - 10233.698: 40.8740% ( 135) 00:07:17.563 10233.698 - 10284.111: 42.0845% ( 141) 00:07:17.563 10284.111 - 10334.523: 43.2349% ( 134) 00:07:17.563 10334.523 - 10384.935: 44.4111% ( 137) 00:07:17.563 10384.935 - 10435.348: 45.5701% ( 135) 00:07:17.563 10435.348 - 10485.760: 46.4114% ( 98) 00:07:17.563 10485.760 - 10536.172: 47.2699% ( 100) 00:07:17.563 10536.172 - 10586.585: 48.1113% ( 98) 00:07:17.563 10586.585 - 10636.997: 49.1930% ( 126) 00:07:17.563 10636.997 - 10687.409: 50.1116% ( 107) 00:07:17.563 10687.409 - 10737.822: 51.0646% ( 111) 00:07:17.563 10737.822 - 10788.234: 51.9488% ( 103) 00:07:17.563 10788.234 - 10838.646: 52.6442% ( 81) 00:07:17.563 10838.646 - 10889.058: 53.5543% ( 106) 00:07:17.563 10889.058 - 10939.471: 54.5673% ( 118) 00:07:17.563 10939.471 - 10989.883: 55.4688% ( 105) 00:07:17.563 10989.883 - 11040.295: 56.4646% ( 116) 00:07:17.563 11040.295 - 11090.708: 57.3575% ( 104) 00:07:17.563 11090.708 - 11141.120: 58.3190% ( 112) 00:07:17.563 11141.120 - 11191.532: 59.4093% ( 127) 00:07:17.563 11191.532 - 11241.945: 60.3795% ( 113) 00:07:17.563 11241.945 - 11292.357: 61.8046% ( 166) 00:07:17.563 11292.357 - 11342.769: 62.5258% ( 84) 00:07:17.563 11342.769 - 11393.182: 63.2555% ( 85) 00:07:17.563 11393.182 - 11443.594: 64.5519% ( 151) 00:07:17.563 11443.594 - 11494.006: 65.4361% ( 103) 00:07:17.563 11494.006 - 11544.418: 66.1487% ( 83) 00:07:17.563 11544.418 - 11594.831: 66.6466% ( 58) 00:07:17.563 11594.831 - 11645.243: 67.3764% ( 85) 00:07:17.563 11645.243 - 11695.655: 67.8915% ( 60) 00:07:17.563 11695.655 - 11746.068: 68.4409% ( 64) 00:07:17.563 11746.068 - 11796.480: 69.1621% ( 84) 00:07:17.563 11796.480 - 11846.892: 69.7630% ( 70) 00:07:17.563 11846.892 - 11897.305: 70.3125% ( 64) 00:07:17.563 11897.305 - 11947.717: 71.1109% ( 93) 00:07:17.563 11947.717 - 11998.129: 71.7634% ( 76) 00:07:17.563 11998.129 - 12048.542: 72.3729% ( 71) 00:07:17.563 12048.542 - 12098.954: 73.2916% ( 107) 00:07:17.563 12098.954 - 12149.366: 73.9955% ( 82) 00:07:17.563 12149.366 - 12199.778: 74.6995% ( 82) 00:07:17.563 12199.778 - 12250.191: 75.4464% ( 87) 00:07:17.563 12250.191 - 12300.603: 76.1504% ( 82) 00:07:17.563 12300.603 - 12351.015: 76.8372% ( 80) 00:07:17.563 12351.015 - 12401.428: 77.6957% ( 100) 00:07:17.563 12401.428 - 12451.840: 78.4942% ( 93) 00:07:17.563 12451.840 - 12502.252: 79.2067% ( 83) 00:07:17.563 12502.252 - 12552.665: 79.8592% ( 76) 00:07:17.563 12552.665 - 12603.077: 80.5632% ( 82) 00:07:17.563 12603.077 - 12653.489: 81.0440% ( 56) 00:07:17.563 12653.489 - 12703.902: 81.5419% ( 58) 00:07:17.563 12703.902 - 12754.314: 81.9626% ( 49) 00:07:17.563 12754.314 - 12804.726: 82.4262% ( 54) 00:07:17.563 12804.726 - 12855.138: 82.9155% ( 57) 00:07:17.563 12855.138 - 12905.551: 83.3448% ( 50) 00:07:17.563 12905.551 - 13006.375: 84.1518% ( 94) 00:07:17.563 13006.375 - 13107.200: 85.0532% ( 105) 00:07:17.563 13107.200 - 13208.025: 85.9032% ( 99) 00:07:17.563 13208.025 - 13308.849: 86.4870% ( 68) 00:07:17.563 13308.849 - 13409.674: 87.0450% ( 65) 00:07:17.563 13409.674 - 13510.498: 87.6202% ( 67) 00:07:17.563 13510.498 - 13611.323: 88.0495% ( 50) 00:07:17.563 13611.323 - 13712.148: 88.4100% ( 42) 00:07:17.564 13712.148 - 13812.972: 88.7706% ( 42) 00:07:17.564 13812.972 - 13913.797: 88.9852% ( 25) 00:07:17.564 13913.797 - 14014.622: 89.3201% ( 39) 00:07:17.564 14014.622 - 14115.446: 89.6120% ( 34) 00:07:17.564 14115.446 - 14216.271: 89.8781% ( 31) 00:07:17.564 14216.271 - 14317.095: 90.1099% ( 27) 
00:07:17.564 14317.095 - 14417.920: 90.5048% ( 46) 00:07:17.564 14417.920 - 14518.745: 90.8997% ( 46) 00:07:17.564 14518.745 - 14619.569: 91.2517% ( 41) 00:07:17.564 14619.569 - 14720.394: 91.6295% ( 44) 00:07:17.564 14720.394 - 14821.218: 92.2218% ( 69) 00:07:17.564 14821.218 - 14922.043: 93.1490% ( 108) 00:07:17.564 14922.043 - 15022.868: 93.5869% ( 51) 00:07:17.564 15022.868 - 15123.692: 94.0161% ( 50) 00:07:17.564 15123.692 - 15224.517: 94.3080% ( 34) 00:07:17.564 15224.517 - 15325.342: 94.5398% ( 27) 00:07:17.564 15325.342 - 15426.166: 94.7030% ( 19) 00:07:17.564 15426.166 - 15526.991: 94.9262% ( 26) 00:07:17.564 15526.991 - 15627.815: 95.1837% ( 30) 00:07:17.564 15627.815 - 15728.640: 95.5872% ( 47) 00:07:17.564 15728.640 - 15829.465: 95.7933% ( 24) 00:07:17.564 15829.465 - 15930.289: 96.0766% ( 33) 00:07:17.564 15930.289 - 16031.114: 96.3255% ( 29) 00:07:17.564 16031.114 - 16131.938: 96.4715% ( 17) 00:07:17.564 16131.938 - 16232.763: 96.6346% ( 19) 00:07:17.564 16232.763 - 16333.588: 96.7634% ( 15) 00:07:17.564 16333.588 - 16434.412: 96.8836% ( 14) 00:07:17.564 16434.412 - 16535.237: 97.0381% ( 18) 00:07:17.564 16535.237 - 16636.062: 97.2184% ( 21) 00:07:17.564 16636.062 - 16736.886: 97.5275% ( 36) 00:07:17.564 16736.886 - 16837.711: 97.7593% ( 27) 00:07:17.564 16837.711 - 16938.535: 98.0168% ( 30) 00:07:17.564 16938.535 - 17039.360: 98.2315% ( 25) 00:07:17.564 17039.360 - 17140.185: 98.4032% ( 20) 00:07:17.564 17140.185 - 17241.009: 98.5577% ( 18) 00:07:17.564 17241.009 - 17341.834: 98.6521% ( 11) 00:07:17.564 17341.834 - 17442.658: 98.7294% ( 9) 00:07:17.564 17442.658 - 17543.483: 98.7637% ( 4) 00:07:17.564 17543.483 - 17644.308: 98.7981% ( 4) 00:07:17.564 17644.308 - 17745.132: 98.8324% ( 4) 00:07:17.564 17745.132 - 17845.957: 98.8668% ( 4) 00:07:17.564 17845.957 - 17946.782: 98.9011% ( 4) 00:07:17.564 22383.065 - 22483.889: 98.9097% ( 1) 00:07:17.564 22483.889 - 22584.714: 98.9612% ( 6) 00:07:17.564 22584.714 - 22685.538: 99.0213% ( 7) 00:07:17.564 22685.538 - 22786.363: 99.0814% ( 7) 00:07:17.564 22786.363 - 22887.188: 99.1243% ( 5) 00:07:17.564 22887.188 - 22988.012: 99.1844% ( 7) 00:07:17.564 22988.012 - 23088.837: 99.2445% ( 7) 00:07:17.564 23088.837 - 23189.662: 99.2960% ( 6) 00:07:17.564 23189.662 - 23290.486: 99.3561% ( 7) 00:07:17.564 23290.486 - 23391.311: 99.4076% ( 6) 00:07:17.564 23391.311 - 23492.135: 99.4505% ( 5) 00:07:17.564 30449.034 - 30650.683: 99.5278% ( 9) 00:07:17.564 30650.683 - 30852.332: 99.6051% ( 9) 00:07:17.564 30852.332 - 31053.982: 99.6823% ( 9) 00:07:17.564 31053.982 - 31255.631: 99.7596% ( 9) 00:07:17.564 31255.631 - 31457.280: 99.8283% ( 8) 00:07:17.564 31457.280 - 31658.929: 99.9141% ( 10) 00:07:17.564 31658.929 - 31860.578: 99.9742% ( 7) 00:07:17.564 31860.578 - 32062.228: 100.0000% ( 3) 00:07:17.564 00:07:17.564 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:17.564 ============================================================================== 00:07:17.564 Range in us Cumulative IO count 00:07:17.564 4713.551 - 4738.757: 0.0172% ( 2) 00:07:17.564 4738.757 - 4763.963: 0.0773% ( 7) 00:07:17.564 4763.963 - 4789.169: 0.1030% ( 3) 00:07:17.564 4789.169 - 4814.375: 0.1545% ( 6) 00:07:17.564 4814.375 - 4839.582: 0.2404% ( 10) 00:07:17.564 4839.582 - 4864.788: 0.3262% ( 10) 00:07:17.564 4864.788 - 4889.994: 0.3606% ( 4) 00:07:17.564 4889.994 - 4915.200: 0.3777% ( 2) 00:07:17.564 4915.200 - 4940.406: 0.3949% ( 2) 00:07:17.564 4940.406 - 4965.612: 0.4207% ( 3) 00:07:17.564 4965.612 - 4990.818: 0.4293% ( 1) 00:07:17.564 4990.818 - 
5016.025: 0.4464% ( 2) 00:07:17.564 5016.025 - 5041.231: 0.4636% ( 2) 00:07:17.564 5041.231 - 5066.437: 0.4808% ( 2) 00:07:17.564 5066.437 - 5091.643: 0.4979% ( 2) 00:07:17.564 5091.643 - 5116.849: 0.5151% ( 2) 00:07:17.564 5116.849 - 5142.055: 0.5409% ( 3) 00:07:17.564 5142.055 - 5167.262: 0.5495% ( 1) 00:07:17.564 5873.034 - 5898.240: 0.5580% ( 1) 00:07:17.564 6099.889 - 6125.095: 0.5666% ( 1) 00:07:17.564 6125.095 - 6150.302: 0.5838% ( 2) 00:07:17.564 6150.302 - 6175.508: 0.6095% ( 3) 00:07:17.564 6175.508 - 6200.714: 0.6439% ( 4) 00:07:17.564 6200.714 - 6225.920: 0.6696% ( 3) 00:07:17.564 6225.920 - 6251.126: 0.7212% ( 6) 00:07:17.564 6251.126 - 6276.332: 0.7555% ( 4) 00:07:17.564 6276.332 - 6301.538: 0.9358% ( 21) 00:07:17.564 6301.538 - 6326.745: 0.9701% ( 4) 00:07:17.564 6326.745 - 6351.951: 0.9959% ( 3) 00:07:17.564 6351.951 - 6377.157: 1.0474% ( 6) 00:07:17.564 6377.157 - 6402.363: 1.0646% ( 2) 00:07:17.564 6402.363 - 6427.569: 1.0817% ( 2) 00:07:17.564 6427.569 - 6452.775: 1.1247% ( 5) 00:07:17.564 6452.775 - 6503.188: 1.1933% ( 8) 00:07:17.564 6503.188 - 6553.600: 1.2792% ( 10) 00:07:17.564 6553.600 - 6604.012: 1.4337% ( 18) 00:07:17.564 6604.012 - 6654.425: 1.7685% ( 39) 00:07:17.564 6654.425 - 6704.837: 1.9488% ( 21) 00:07:17.564 6704.837 - 6755.249: 2.4468% ( 58) 00:07:17.564 6755.249 - 6805.662: 2.5755% ( 15) 00:07:17.564 6805.662 - 6856.074: 2.6957% ( 14) 00:07:17.564 6856.074 - 6906.486: 2.8245% ( 15) 00:07:17.564 6906.486 - 6956.898: 3.0563% ( 27) 00:07:17.564 6956.898 - 7007.311: 3.4169% ( 42) 00:07:17.564 7007.311 - 7057.723: 4.0608% ( 75) 00:07:17.564 7057.723 - 7108.135: 4.6360% ( 67) 00:07:17.564 7108.135 - 7158.548: 5.3657% ( 85) 00:07:17.564 7158.548 - 7208.960: 5.7606% ( 46) 00:07:17.564 7208.960 - 7259.372: 6.4818% ( 84) 00:07:17.564 7259.372 - 7309.785: 6.7909% ( 36) 00:07:17.564 7309.785 - 7360.197: 7.1429% ( 41) 00:07:17.564 7360.197 - 7410.609: 7.4863% ( 40) 00:07:17.564 7410.609 - 7461.022: 7.7696% ( 33) 00:07:17.564 7461.022 - 7511.434: 8.0014% ( 27) 00:07:17.564 7511.434 - 7561.846: 8.2503% ( 29) 00:07:17.564 7561.846 - 7612.258: 8.4478% ( 23) 00:07:17.564 7612.258 - 7662.671: 8.7569% ( 36) 00:07:17.564 7662.671 - 7713.083: 9.1089% ( 41) 00:07:17.564 7713.083 - 7763.495: 9.3836% ( 32) 00:07:17.564 7763.495 - 7813.908: 9.7012% ( 37) 00:07:17.564 7813.908 - 7864.320: 9.8815% ( 21) 00:07:17.564 7864.320 - 7914.732: 10.1477% ( 31) 00:07:17.564 7914.732 - 7965.145: 10.5082% ( 42) 00:07:17.564 7965.145 - 8015.557: 10.7057% ( 23) 00:07:17.564 8015.557 - 8065.969: 10.8001% ( 11) 00:07:17.564 8065.969 - 8116.382: 10.8774% ( 9) 00:07:17.564 8116.382 - 8166.794: 10.9633% ( 10) 00:07:17.564 8166.794 - 8217.206: 11.1178% ( 18) 00:07:17.564 8217.206 - 8267.618: 11.2723% ( 18) 00:07:17.564 8267.618 - 8318.031: 11.5127% ( 28) 00:07:17.564 8318.031 - 8368.443: 11.8304% ( 37) 00:07:17.564 8368.443 - 8418.855: 12.2854% ( 53) 00:07:17.564 8418.855 - 8469.268: 12.6889% ( 47) 00:07:17.564 8469.268 - 8519.680: 13.3242% ( 74) 00:07:17.564 8519.680 - 8570.092: 13.7620% ( 51) 00:07:17.564 8570.092 - 8620.505: 14.5776% ( 95) 00:07:17.564 8620.505 - 8670.917: 15.1614% ( 68) 00:07:17.564 8670.917 - 8721.329: 15.7881% ( 73) 00:07:17.564 8721.329 - 8771.742: 16.5093% ( 84) 00:07:17.564 8771.742 - 8822.154: 17.4193% ( 106) 00:07:17.564 8822.154 - 8872.566: 18.0546% ( 74) 00:07:17.564 8872.566 - 8922.978: 18.5611% ( 59) 00:07:17.564 8922.978 - 8973.391: 19.0161% ( 53) 00:07:17.564 8973.391 - 9023.803: 19.4969% ( 56) 00:07:17.564 9023.803 - 9074.215: 20.0721% ( 67) 00:07:17.564 9074.215 - 
9124.628: 20.6988% ( 73) 00:07:17.564 9124.628 - 9175.040: 21.6861% ( 115) 00:07:17.564 9175.040 - 9225.452: 22.6047% ( 107) 00:07:17.564 9225.452 - 9275.865: 23.6865% ( 126) 00:07:17.564 9275.865 - 9326.277: 24.5450% ( 100) 00:07:17.564 9326.277 - 9376.689: 25.4636% ( 107) 00:07:17.564 9376.689 - 9427.102: 26.1933% ( 85) 00:07:17.564 9427.102 - 9477.514: 26.8372% ( 75) 00:07:17.564 9477.514 - 9527.926: 27.7129% ( 102) 00:07:17.564 9527.926 - 9578.338: 28.6916% ( 114) 00:07:17.564 9578.338 - 9628.751: 29.6617% ( 113) 00:07:17.564 9628.751 - 9679.163: 30.5889% ( 108) 00:07:17.564 9679.163 - 9729.575: 31.3444% ( 88) 00:07:17.564 9729.575 - 9779.988: 32.3317% ( 115) 00:07:17.564 9779.988 - 9830.400: 33.3534% ( 119) 00:07:17.564 9830.400 - 9880.812: 34.5124% ( 135) 00:07:17.564 9880.812 - 9931.225: 35.7143% ( 140) 00:07:17.564 9931.225 - 9981.637: 36.6415% ( 108) 00:07:17.564 9981.637 - 10032.049: 37.6459% ( 117) 00:07:17.564 10032.049 - 10082.462: 38.4873% ( 98) 00:07:17.564 10082.462 - 10132.874: 39.4660% ( 114) 00:07:17.564 10132.874 - 10183.286: 40.4533% ( 115) 00:07:17.564 10183.286 - 10233.698: 41.3376% ( 103) 00:07:17.564 10233.698 - 10284.111: 42.2047% ( 101) 00:07:17.564 10284.111 - 10334.523: 43.3293% ( 131) 00:07:17.564 10334.523 - 10384.935: 44.2909% ( 112) 00:07:17.564 10384.935 - 10435.348: 45.3211% ( 120) 00:07:17.564 10435.348 - 10485.760: 46.1882% ( 101) 00:07:17.564 10485.760 - 10536.172: 47.0810% ( 104) 00:07:17.564 10536.172 - 10586.585: 47.7078% ( 73) 00:07:17.564 10586.585 - 10636.997: 48.2744% ( 66) 00:07:17.564 10636.997 - 10687.409: 49.2445% ( 113) 00:07:17.564 10687.409 - 10737.822: 50.0343% ( 92) 00:07:17.564 10737.822 - 10788.234: 50.9444% ( 106) 00:07:17.564 10788.234 - 10838.646: 51.6054% ( 77) 00:07:17.564 10838.646 - 10889.058: 52.5240% ( 107) 00:07:17.564 10889.058 - 10939.471: 53.4598% ( 109) 00:07:17.564 10939.471 - 10989.883: 54.4986% ( 121) 00:07:17.564 10989.883 - 11040.295: 55.4430% ( 110) 00:07:17.564 11040.295 - 11090.708: 56.5419% ( 128) 00:07:17.565 11090.708 - 11141.120: 57.5464% ( 117) 00:07:17.565 11141.120 - 11191.532: 58.4650% ( 107) 00:07:17.565 11191.532 - 11241.945: 59.4437% ( 114) 00:07:17.565 11241.945 - 11292.357: 60.3280% ( 103) 00:07:17.565 11292.357 - 11342.769: 61.7273% ( 163) 00:07:17.565 11342.769 - 11393.182: 62.7919% ( 124) 00:07:17.565 11393.182 - 11443.594: 63.6247% ( 97) 00:07:17.565 11443.594 - 11494.006: 64.5862% ( 112) 00:07:17.565 11494.006 - 11544.418: 65.4876% ( 105) 00:07:17.565 11544.418 - 11594.831: 66.3633% ( 102) 00:07:17.565 11594.831 - 11645.243: 67.2562% ( 104) 00:07:17.565 11645.243 - 11695.655: 68.1748% ( 107) 00:07:17.565 11695.655 - 11746.068: 69.0677% ( 104) 00:07:17.565 11746.068 - 11796.480: 69.8146% ( 87) 00:07:17.565 11796.480 - 11846.892: 70.5014% ( 80) 00:07:17.565 11846.892 - 11897.305: 71.1624% ( 77) 00:07:17.565 11897.305 - 11947.717: 71.7720% ( 71) 00:07:17.565 11947.717 - 11998.129: 72.4073% ( 74) 00:07:17.565 11998.129 - 12048.542: 73.1456% ( 86) 00:07:17.565 12048.542 - 12098.954: 74.1071% ( 112) 00:07:17.565 12098.954 - 12149.366: 75.1030% ( 116) 00:07:17.565 12149.366 - 12199.778: 75.7984% ( 81) 00:07:17.565 12199.778 - 12250.191: 76.4509% ( 76) 00:07:17.565 12250.191 - 12300.603: 77.0089% ( 65) 00:07:17.565 12300.603 - 12351.015: 77.6271% ( 72) 00:07:17.565 12351.015 - 12401.428: 78.1593% ( 62) 00:07:17.565 12401.428 - 12451.840: 78.7088% ( 64) 00:07:17.565 12451.840 - 12502.252: 79.3956% ( 80) 00:07:17.565 12502.252 - 12552.665: 80.0738% ( 79) 00:07:17.565 12552.665 - 12603.077: 80.7263% ( 76) 
00:07:17.565 12603.077 - 12653.489: 81.4904% ( 89) 00:07:17.565 12653.489 - 12703.902: 82.2201% ( 85) 00:07:17.565 12703.902 - 12754.314: 82.8898% ( 78) 00:07:17.565 12754.314 - 12804.726: 83.5165% ( 73) 00:07:17.565 12804.726 - 12855.138: 83.9372% ( 49) 00:07:17.565 12855.138 - 12905.551: 84.2634% ( 38) 00:07:17.565 12905.551 - 13006.375: 84.9073% ( 75) 00:07:17.565 13006.375 - 13107.200: 85.5426% ( 74) 00:07:17.565 13107.200 - 13208.025: 86.2036% ( 77) 00:07:17.565 13208.025 - 13308.849: 86.9935% ( 92) 00:07:17.565 13308.849 - 13409.674: 87.6288% ( 74) 00:07:17.565 13409.674 - 13510.498: 88.0666% ( 51) 00:07:17.565 13510.498 - 13611.323: 88.5817% ( 60) 00:07:17.565 13611.323 - 13712.148: 89.0453% ( 54) 00:07:17.565 13712.148 - 13812.972: 89.6120% ( 66) 00:07:17.565 13812.972 - 13913.797: 89.8523% ( 28) 00:07:17.565 13913.797 - 14014.622: 90.0240% ( 20) 00:07:17.565 14014.622 - 14115.446: 90.2129% ( 22) 00:07:17.565 14115.446 - 14216.271: 90.3674% ( 18) 00:07:17.565 14216.271 - 14317.095: 90.5391% ( 20) 00:07:17.565 14317.095 - 14417.920: 90.9770% ( 51) 00:07:17.565 14417.920 - 14518.745: 91.4234% ( 52) 00:07:17.565 14518.745 - 14619.569: 91.7067% ( 33) 00:07:17.565 14619.569 - 14720.394: 92.0330% ( 38) 00:07:17.565 14720.394 - 14821.218: 92.3678% ( 39) 00:07:17.565 14821.218 - 14922.043: 92.7541% ( 45) 00:07:17.565 14922.043 - 15022.868: 93.0632% ( 36) 00:07:17.565 15022.868 - 15123.692: 93.8015% ( 86) 00:07:17.565 15123.692 - 15224.517: 94.0762% ( 32) 00:07:17.565 15224.517 - 15325.342: 94.2394% ( 19) 00:07:17.565 15325.342 - 15426.166: 94.3681% ( 15) 00:07:17.565 15426.166 - 15526.991: 94.4712% ( 12) 00:07:17.565 15526.991 - 15627.815: 94.6085% ( 16) 00:07:17.565 15627.815 - 15728.640: 94.7716% ( 19) 00:07:17.565 15728.640 - 15829.465: 94.9519% ( 21) 00:07:17.565 15829.465 - 15930.289: 95.0979% ( 17) 00:07:17.565 15930.289 - 16031.114: 95.4670% ( 43) 00:07:17.565 16031.114 - 16131.938: 95.7160% ( 29) 00:07:17.565 16131.938 - 16232.763: 95.9993% ( 33) 00:07:17.565 16232.763 - 16333.588: 96.3942% ( 46) 00:07:17.565 16333.588 - 16434.412: 96.6604% ( 31) 00:07:17.565 16434.412 - 16535.237: 96.9523% ( 34) 00:07:17.565 16535.237 - 16636.062: 97.2356% ( 33) 00:07:17.565 16636.062 - 16736.886: 97.4931% ( 30) 00:07:17.565 16736.886 - 16837.711: 97.7421% ( 29) 00:07:17.565 16837.711 - 16938.535: 97.9138% ( 20) 00:07:17.565 16938.535 - 17039.360: 98.0855% ( 20) 00:07:17.565 17039.360 - 17140.185: 98.4117% ( 38) 00:07:17.565 17140.185 - 17241.009: 98.5491% ( 16) 00:07:17.565 17241.009 - 17341.834: 98.6693% ( 14) 00:07:17.565 17341.834 - 17442.658: 98.7723% ( 12) 00:07:17.565 17442.658 - 17543.483: 98.8410% ( 8) 00:07:17.565 17543.483 - 17644.308: 98.8925% ( 6) 00:07:17.565 17644.308 - 17745.132: 98.9011% ( 1) 00:07:17.565 23391.311 - 23492.135: 98.9183% ( 2) 00:07:17.565 23492.135 - 23592.960: 98.9612% ( 5) 00:07:17.565 23592.960 - 23693.785: 98.9955% ( 4) 00:07:17.565 23693.785 - 23794.609: 99.0385% ( 5) 00:07:17.565 23794.609 - 23895.434: 99.0728% ( 4) 00:07:17.565 23895.434 - 23996.258: 99.1157% ( 5) 00:07:17.565 23996.258 - 24097.083: 99.1587% ( 5) 00:07:17.565 24097.083 - 24197.908: 99.1930% ( 4) 00:07:17.565 24197.908 - 24298.732: 99.2188% ( 3) 00:07:17.565 24298.732 - 24399.557: 99.2617% ( 5) 00:07:17.565 24399.557 - 24500.382: 99.2960% ( 4) 00:07:17.565 24500.382 - 24601.206: 99.3304% ( 4) 00:07:17.565 24601.206 - 24702.031: 99.3819% ( 6) 00:07:17.565 24702.031 - 24802.855: 99.4334% ( 6) 00:07:17.565 24802.855 - 24903.680: 99.4505% ( 2) 00:07:17.565 31457.280 - 31658.929: 99.4849% ( 4) 
00:07:17.565 31658.929 - 31860.578: 99.5278% ( 5) 00:07:17.565 31860.578 - 32062.228: 99.5879% ( 7) 00:07:17.565 32062.228 - 32263.877: 99.6566% ( 8) 00:07:17.565 32263.877 - 32465.526: 99.7167% ( 7) 00:07:17.565 32465.526 - 32667.175: 99.7682% ( 6) 00:07:17.565 32667.175 - 32868.825: 99.8283% ( 7) 00:07:17.565 32868.825 - 33070.474: 99.8884% ( 7) 00:07:17.565 33070.474 - 33272.123: 99.9399% ( 6) 00:07:17.565 33272.123 - 33473.772: 100.0000% ( 7) 00:07:17.565 00:07:17.565 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:17.565 ============================================================================== 00:07:17.565 Range in us Cumulative IO count 00:07:17.565 4436.283 - 4461.489: 0.0429% ( 5) 00:07:17.565 4461.489 - 4486.695: 0.0944% ( 6) 00:07:17.565 4486.695 - 4511.902: 0.1631% ( 8) 00:07:17.565 4511.902 - 4537.108: 0.2146% ( 6) 00:07:17.565 4537.108 - 4562.314: 0.3177% ( 12) 00:07:17.565 4562.314 - 4587.520: 0.3606% ( 5) 00:07:17.565 4587.520 - 4612.726: 0.3777% ( 2) 00:07:17.565 4612.726 - 4637.932: 0.3863% ( 1) 00:07:17.565 4637.932 - 4663.138: 0.4121% ( 3) 00:07:17.565 4663.138 - 4688.345: 0.4293% ( 2) 00:07:17.565 4688.345 - 4713.551: 0.4464% ( 2) 00:07:17.565 4713.551 - 4738.757: 0.4636% ( 2) 00:07:17.565 4738.757 - 4763.963: 0.4808% ( 2) 00:07:17.565 4763.963 - 4789.169: 0.5065% ( 3) 00:07:17.565 4789.169 - 4814.375: 0.5237% ( 2) 00:07:17.565 4814.375 - 4839.582: 0.5409% ( 2) 00:07:17.565 4839.582 - 4864.788: 0.5495% ( 1) 00:07:17.565 6150.302 - 6175.508: 0.5580% ( 1) 00:07:17.565 6200.714 - 6225.920: 0.5838% ( 3) 00:07:17.565 6225.920 - 6251.126: 0.6267% ( 5) 00:07:17.565 6251.126 - 6276.332: 0.6696% ( 5) 00:07:17.565 6276.332 - 6301.538: 0.6954% ( 3) 00:07:17.565 6301.538 - 6326.745: 0.7469% ( 6) 00:07:17.565 6326.745 - 6351.951: 0.8070% ( 7) 00:07:17.565 6351.951 - 6377.157: 0.9444% ( 16) 00:07:17.565 6377.157 - 6402.363: 1.0130% ( 8) 00:07:17.565 6402.363 - 6427.569: 1.0989% ( 10) 00:07:17.565 6427.569 - 6452.775: 1.1762% ( 9) 00:07:17.565 6452.775 - 6503.188: 1.4251% ( 29) 00:07:17.565 6503.188 - 6553.600: 1.6741% ( 29) 00:07:17.565 6553.600 - 6604.012: 1.8716% ( 23) 00:07:17.565 6604.012 - 6654.425: 2.0089% ( 16) 00:07:17.565 6654.425 - 6704.837: 2.1291% ( 14) 00:07:17.565 6704.837 - 6755.249: 2.2150% ( 10) 00:07:17.565 6755.249 - 6805.662: 2.2665% ( 6) 00:07:17.565 6805.662 - 6856.074: 2.3266% ( 7) 00:07:17.565 6856.074 - 6906.486: 2.4983% ( 20) 00:07:17.565 6906.486 - 6956.898: 2.9447% ( 52) 00:07:17.565 6956.898 - 7007.311: 3.2795% ( 39) 00:07:17.565 7007.311 - 7057.723: 3.8462% ( 66) 00:07:17.565 7057.723 - 7108.135: 4.8420% ( 116) 00:07:17.565 7108.135 - 7158.548: 5.8637% ( 119) 00:07:17.565 7158.548 - 7208.960: 6.1813% ( 37) 00:07:17.565 7208.960 - 7259.372: 6.5076% ( 38) 00:07:17.565 7259.372 - 7309.785: 7.0227% ( 60) 00:07:17.565 7309.785 - 7360.197: 7.3317% ( 36) 00:07:17.565 7360.197 - 7410.609: 7.4863% ( 18) 00:07:17.565 7410.609 - 7461.022: 7.5635% ( 9) 00:07:17.565 7461.022 - 7511.434: 7.7009% ( 16) 00:07:17.565 7511.434 - 7561.846: 7.8383% ( 16) 00:07:17.565 7561.846 - 7612.258: 8.2160% ( 44) 00:07:17.565 7612.258 - 7662.671: 8.5422% ( 38) 00:07:17.565 7662.671 - 7713.083: 8.8170% ( 32) 00:07:17.565 7713.083 - 7763.495: 9.3063% ( 57) 00:07:17.565 7763.495 - 7813.908: 9.5553% ( 29) 00:07:17.565 7813.908 - 7864.320: 9.7613% ( 24) 00:07:17.565 7864.320 - 7914.732: 9.9330% ( 20) 00:07:17.565 7914.732 - 7965.145: 10.0704% ( 16) 00:07:17.565 7965.145 - 8015.557: 10.2163% ( 17) 00:07:17.565 8015.557 - 8065.969: 10.4739% ( 30) 00:07:17.565 8065.969 
- 8116.382: 10.5941% ( 14) 00:07:17.565 8116.382 - 8166.794: 10.7486% ( 18) 00:07:17.565 8166.794 - 8217.206: 10.9375% ( 22) 00:07:17.565 8217.206 - 8267.618: 11.1521% ( 25) 00:07:17.565 8267.618 - 8318.031: 11.4612% ( 36) 00:07:17.565 8318.031 - 8368.443: 11.9076% ( 52) 00:07:17.565 8368.443 - 8418.855: 12.4141% ( 59) 00:07:17.565 8418.855 - 8469.268: 12.8005% ( 45) 00:07:17.565 8469.268 - 8519.680: 13.3070% ( 59) 00:07:17.565 8519.680 - 8570.092: 13.8221% ( 60) 00:07:17.565 8570.092 - 8620.505: 14.4317% ( 71) 00:07:17.565 8620.505 - 8670.917: 15.0584% ( 73) 00:07:17.565 8670.917 - 8721.329: 15.7194% ( 77) 00:07:17.565 8721.329 - 8771.742: 16.5608% ( 98) 00:07:17.565 8771.742 - 8822.154: 17.2905% ( 85) 00:07:17.565 8822.154 - 8872.566: 17.8486% ( 65) 00:07:17.566 8872.566 - 8922.978: 18.3293% ( 56) 00:07:17.566 8922.978 - 8973.391: 18.8444% ( 60) 00:07:17.566 8973.391 - 9023.803: 19.4368% ( 69) 00:07:17.566 9023.803 - 9074.215: 20.1322% ( 81) 00:07:17.566 9074.215 - 9124.628: 21.2397% ( 129) 00:07:17.566 9124.628 - 9175.040: 22.0295% ( 92) 00:07:17.566 9175.040 - 9225.452: 22.6562% ( 73) 00:07:17.566 9225.452 - 9275.865: 23.4633% ( 94) 00:07:17.566 9275.865 - 9326.277: 24.6308% ( 136) 00:07:17.566 9326.277 - 9376.689: 25.4035% ( 90) 00:07:17.566 9376.689 - 9427.102: 26.0989% ( 81) 00:07:17.566 9427.102 - 9477.514: 26.8115% ( 83) 00:07:17.566 9477.514 - 9527.926: 27.6872% ( 102) 00:07:17.566 9527.926 - 9578.338: 28.3225% ( 74) 00:07:17.566 9578.338 - 9628.751: 28.9578% ( 74) 00:07:17.566 9628.751 - 9679.163: 29.5501% ( 69) 00:07:17.566 9679.163 - 9729.575: 30.5632% ( 118) 00:07:17.566 9729.575 - 9779.988: 31.7479% ( 138) 00:07:17.566 9779.988 - 9830.400: 32.8039% ( 123) 00:07:17.566 9830.400 - 9880.812: 33.6538% ( 99) 00:07:17.566 9880.812 - 9931.225: 34.5553% ( 105) 00:07:17.566 9931.225 - 9981.637: 35.5941% ( 121) 00:07:17.566 9981.637 - 10032.049: 36.6758% ( 126) 00:07:17.566 10032.049 - 10082.462: 37.8863% ( 141) 00:07:17.566 10082.462 - 10132.874: 38.7105% ( 96) 00:07:17.566 10132.874 - 10183.286: 39.5948% ( 103) 00:07:17.566 10183.286 - 10233.698: 40.4619% ( 101) 00:07:17.566 10233.698 - 10284.111: 41.4148% ( 111) 00:07:17.566 10284.111 - 10334.523: 42.4365% ( 119) 00:07:17.566 10334.523 - 10384.935: 43.2778% ( 98) 00:07:17.566 10384.935 - 10435.348: 44.2136% ( 109) 00:07:17.566 10435.348 - 10485.760: 45.0464% ( 97) 00:07:17.566 10485.760 - 10536.172: 45.9049% ( 100) 00:07:17.566 10536.172 - 10586.585: 46.6861% ( 91) 00:07:17.566 10586.585 - 10636.997: 47.5704% ( 103) 00:07:17.566 10636.997 - 10687.409: 48.4117% ( 98) 00:07:17.566 10687.409 - 10737.822: 49.3304% ( 107) 00:07:17.566 10737.822 - 10788.234: 49.9914% ( 77) 00:07:17.566 10788.234 - 10838.646: 50.7641% ( 90) 00:07:17.566 10838.646 - 10889.058: 51.5883% ( 96) 00:07:17.566 10889.058 - 10939.471: 52.5755% ( 115) 00:07:17.566 10939.471 - 10989.883: 53.5886% ( 118) 00:07:17.566 10989.883 - 11040.295: 54.6961% ( 129) 00:07:17.566 11040.295 - 11090.708: 55.7349% ( 121) 00:07:17.566 11090.708 - 11141.120: 56.6535% ( 107) 00:07:17.566 11141.120 - 11191.532: 57.7610% ( 129) 00:07:17.566 11191.532 - 11241.945: 58.9629% ( 140) 00:07:17.566 11241.945 - 11292.357: 59.9416% ( 114) 00:07:17.566 11292.357 - 11342.769: 60.8087% ( 101) 00:07:17.566 11342.769 - 11393.182: 62.0192% ( 141) 00:07:17.566 11393.182 - 11443.594: 63.0580% ( 121) 00:07:17.566 11443.594 - 11494.006: 64.1569% ( 128) 00:07:17.566 11494.006 - 11544.418: 65.3674% ( 141) 00:07:17.566 11544.418 - 11594.831: 66.5093% ( 133) 00:07:17.566 11594.831 - 11645.243: 67.7112% ( 140) 
00:07:17.566 11645.243 - 11695.655: 68.6041% ( 104) 00:07:17.566 11695.655 - 11746.068: 69.4712% ( 101) 00:07:17.566 11746.068 - 11796.480: 70.1322% ( 77) 00:07:17.566 11796.480 - 11846.892: 70.6645% ( 62) 00:07:17.566 11846.892 - 11897.305: 71.2740% ( 71) 00:07:17.566 11897.305 - 11947.717: 71.9609% ( 80) 00:07:17.566 11947.717 - 11998.129: 72.7335% ( 90) 00:07:17.566 11998.129 - 12048.542: 73.7981% ( 124) 00:07:17.566 12048.542 - 12098.954: 74.8541% ( 123) 00:07:17.566 12098.954 - 12149.366: 75.9873% ( 132) 00:07:17.566 12149.366 - 12199.778: 77.0089% ( 119) 00:07:17.566 12199.778 - 12250.191: 77.9104% ( 105) 00:07:17.566 12250.191 - 12300.603: 79.1724% ( 147) 00:07:17.566 12300.603 - 12351.015: 80.0137% ( 98) 00:07:17.566 12351.015 - 12401.428: 80.6319% ( 72) 00:07:17.566 12401.428 - 12451.840: 81.1041% ( 55) 00:07:17.566 12451.840 - 12502.252: 81.7222% ( 72) 00:07:17.566 12502.252 - 12552.665: 82.3231% ( 70) 00:07:17.566 12552.665 - 12603.077: 82.7867% ( 54) 00:07:17.566 12603.077 - 12653.489: 83.2160% ( 50) 00:07:17.566 12653.489 - 12703.902: 84.0230% ( 94) 00:07:17.566 12703.902 - 12754.314: 84.4179% ( 46) 00:07:17.566 12754.314 - 12804.726: 84.7699% ( 41) 00:07:17.566 12804.726 - 12855.138: 85.1391% ( 43) 00:07:17.566 12855.138 - 12905.551: 85.5340% ( 46) 00:07:17.566 12905.551 - 13006.375: 86.0663% ( 62) 00:07:17.566 13006.375 - 13107.200: 86.5041% ( 51) 00:07:17.566 13107.200 - 13208.025: 87.2339% ( 85) 00:07:17.566 13208.025 - 13308.849: 87.8434% ( 71) 00:07:17.566 13308.849 - 13409.674: 88.4787% ( 74) 00:07:17.566 13409.674 - 13510.498: 88.9509% ( 55) 00:07:17.566 13510.498 - 13611.323: 89.3286% ( 44) 00:07:17.566 13611.323 - 13712.148: 89.5604% ( 27) 00:07:17.566 13712.148 - 13812.972: 89.8523% ( 34) 00:07:17.566 13812.972 - 13913.797: 90.1099% ( 30) 00:07:17.566 13913.797 - 14014.622: 90.2730% ( 19) 00:07:17.566 14014.622 - 14115.446: 90.5048% ( 27) 00:07:17.566 14115.446 - 14216.271: 90.6765% ( 20) 00:07:17.566 14216.271 - 14317.095: 90.8654% ( 22) 00:07:17.566 14317.095 - 14417.920: 91.0628% ( 23) 00:07:17.566 14417.920 - 14518.745: 91.2689% ( 24) 00:07:17.566 14518.745 - 14619.569: 91.6123% ( 40) 00:07:17.566 14619.569 - 14720.394: 91.9214% ( 36) 00:07:17.566 14720.394 - 14821.218: 92.2734% ( 41) 00:07:17.566 14821.218 - 14922.043: 92.8228% ( 64) 00:07:17.566 14922.043 - 15022.868: 93.0117% ( 22) 00:07:17.566 15022.868 - 15123.692: 93.2606% ( 29) 00:07:17.566 15123.692 - 15224.517: 93.5354% ( 32) 00:07:17.566 15224.517 - 15325.342: 93.7586% ( 26) 00:07:17.566 15325.342 - 15426.166: 94.0762% ( 37) 00:07:17.566 15426.166 - 15526.991: 94.1793% ( 12) 00:07:17.566 15526.991 - 15627.815: 94.3510% ( 20) 00:07:17.566 15627.815 - 15728.640: 94.5312% ( 21) 00:07:17.566 15728.640 - 15829.465: 94.7888% ( 30) 00:07:17.566 15829.465 - 15930.289: 95.0549% ( 31) 00:07:17.566 15930.289 - 16031.114: 95.4499% ( 46) 00:07:17.566 16031.114 - 16131.938: 96.0766% ( 73) 00:07:17.566 16131.938 - 16232.763: 96.3427% ( 31) 00:07:17.566 16232.763 - 16333.588: 96.5745% ( 27) 00:07:17.566 16333.588 - 16434.412: 96.7891% ( 25) 00:07:17.566 16434.412 - 16535.237: 97.0124% ( 26) 00:07:17.566 16535.237 - 16636.062: 97.3214% ( 36) 00:07:17.566 16636.062 - 16736.886: 97.5876% ( 31) 00:07:17.566 16736.886 - 16837.711: 97.7936% ( 24) 00:07:17.566 16837.711 - 16938.535: 97.9481% ( 18) 00:07:17.566 16938.535 - 17039.360: 98.0855% ( 16) 00:07:17.566 17039.360 - 17140.185: 98.1971% ( 13) 00:07:17.566 17140.185 - 17241.009: 98.4289% ( 27) 00:07:17.566 17241.009 - 17341.834: 98.7294% ( 35) 00:07:17.566 17341.834 - 
17442.658: 98.8582% ( 15) 00:07:17.566 17442.658 - 17543.483: 98.8925% ( 4) 00:07:17.566 17745.132 - 17845.957: 98.9011% ( 1) 00:07:17.566 23088.837 - 23189.662: 98.9269% ( 3) 00:07:17.566 23189.662 - 23290.486: 98.9784% ( 6) 00:07:17.566 23290.486 - 23391.311: 99.0385% ( 7) 00:07:17.566 23391.311 - 23492.135: 99.0986% ( 7) 00:07:17.566 23492.135 - 23592.960: 99.1501% ( 6) 00:07:17.566 23592.960 - 23693.785: 99.2102% ( 7) 00:07:17.566 23693.785 - 23794.609: 99.2703% ( 7) 00:07:17.566 23794.609 - 23895.434: 99.3218% ( 6) 00:07:17.566 23895.434 - 23996.258: 99.3733% ( 6) 00:07:17.566 23996.258 - 24097.083: 99.4162% ( 5) 00:07:17.566 24097.083 - 24197.908: 99.4505% ( 4) 00:07:17.566 31860.578 - 32062.228: 99.5106% ( 7) 00:07:17.566 32062.228 - 32263.877: 99.5793% ( 8) 00:07:17.566 32263.877 - 32465.526: 99.6566% ( 9) 00:07:17.566 32465.526 - 32667.175: 99.7339% ( 9) 00:07:17.566 32667.175 - 32868.825: 99.8111% ( 9) 00:07:17.566 32868.825 - 33070.474: 99.8884% ( 9) 00:07:17.566 33070.474 - 33272.123: 99.9657% ( 9) 00:07:17.566 33272.123 - 33473.772: 100.0000% ( 4) 00:07:17.566 00:07:17.566 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:17.566 ============================================================================== 00:07:17.566 Range in us Cumulative IO count 00:07:17.566 4108.603 - 4133.809: 0.0258% ( 3) 00:07:17.566 4133.809 - 4159.015: 0.0859% ( 7) 00:07:17.566 4159.015 - 4184.222: 0.1374% ( 6) 00:07:17.566 4184.222 - 4209.428: 0.1889% ( 6) 00:07:17.566 4209.428 - 4234.634: 0.3091% ( 14) 00:07:17.567 4234.634 - 4259.840: 0.3520% ( 5) 00:07:17.567 4259.840 - 4285.046: 0.3692% ( 2) 00:07:17.567 4285.046 - 4310.252: 0.3949% ( 3) 00:07:17.567 4310.252 - 4335.458: 0.4121% ( 2) 00:07:17.567 4335.458 - 4360.665: 0.4293% ( 2) 00:07:17.567 4360.665 - 4385.871: 0.4464% ( 2) 00:07:17.567 4385.871 - 4411.077: 0.4636% ( 2) 00:07:17.567 4411.077 - 4436.283: 0.4808% ( 2) 00:07:17.567 4436.283 - 4461.489: 0.4979% ( 2) 00:07:17.567 4461.489 - 4486.695: 0.5151% ( 2) 00:07:17.567 4486.695 - 4511.902: 0.5409% ( 3) 00:07:17.567 4511.902 - 4537.108: 0.5495% ( 1) 00:07:17.567 6125.095 - 6150.302: 0.5580% ( 1) 00:07:17.567 6175.508 - 6200.714: 0.5666% ( 1) 00:07:17.567 6225.920 - 6251.126: 0.5752% ( 1) 00:07:17.567 6251.126 - 6276.332: 0.6095% ( 4) 00:07:17.567 6276.332 - 6301.538: 0.6611% ( 6) 00:07:17.567 6301.538 - 6326.745: 0.7297% ( 8) 00:07:17.567 6326.745 - 6351.951: 0.8156% ( 10) 00:07:17.567 6351.951 - 6377.157: 0.8843% ( 8) 00:07:17.567 6377.157 - 6402.363: 0.9530% ( 8) 00:07:17.567 6402.363 - 6427.569: 1.1676% ( 25) 00:07:17.567 6427.569 - 6452.775: 1.3908% ( 26) 00:07:17.567 6452.775 - 6503.188: 1.5797% ( 22) 00:07:17.567 6503.188 - 6553.600: 1.8029% ( 26) 00:07:17.567 6553.600 - 6604.012: 2.0175% ( 25) 00:07:17.567 6604.012 - 6654.425: 2.1120% ( 11) 00:07:17.567 6654.425 - 6704.837: 2.2064% ( 11) 00:07:17.567 6704.837 - 6755.249: 2.2922% ( 10) 00:07:17.567 6755.249 - 6805.662: 2.3695% ( 9) 00:07:17.567 6805.662 - 6856.074: 2.5155% ( 17) 00:07:17.567 6856.074 - 6906.486: 2.6872% ( 20) 00:07:17.567 6906.486 - 6956.898: 2.9619% ( 32) 00:07:17.567 6956.898 - 7007.311: 3.3482% ( 45) 00:07:17.567 7007.311 - 7057.723: 4.1981% ( 99) 00:07:17.567 7057.723 - 7108.135: 5.0996% ( 105) 00:07:17.567 7108.135 - 7158.548: 5.8036% ( 82) 00:07:17.567 7158.548 - 7208.960: 6.3874% ( 68) 00:07:17.567 7208.960 - 7259.372: 6.9626% ( 67) 00:07:17.567 7259.372 - 7309.785: 7.5120% ( 64) 00:07:17.567 7309.785 - 7360.197: 7.8039% ( 34) 00:07:17.567 7360.197 - 7410.609: 7.9670% ( 19) 00:07:17.567 
7410.609 - 7461.022: 8.0786% ( 13) 00:07:17.567 7461.022 - 7511.434: 8.1731% ( 11) 00:07:17.567 7511.434 - 7561.846: 8.2675% ( 11) 00:07:17.567 7561.846 - 7612.258: 8.3276% ( 7) 00:07:17.567 7612.258 - 7662.671: 8.4049% ( 9) 00:07:17.567 7662.671 - 7713.083: 8.4907% ( 10) 00:07:17.567 7713.083 - 7763.495: 8.6968% ( 24) 00:07:17.567 7763.495 - 7813.908: 8.8513% ( 18) 00:07:17.567 7813.908 - 7864.320: 9.0573% ( 24) 00:07:17.567 7864.320 - 7914.732: 9.2634% ( 24) 00:07:17.567 7914.732 - 7965.145: 9.5209% ( 30) 00:07:17.567 7965.145 - 8015.557: 9.8644% ( 40) 00:07:17.567 8015.557 - 8065.969: 10.1648% ( 35) 00:07:17.567 8065.969 - 8116.382: 10.3280% ( 19) 00:07:17.567 8116.382 - 8166.794: 10.4997% ( 20) 00:07:17.567 8166.794 - 8217.206: 10.8516% ( 41) 00:07:17.567 8217.206 - 8267.618: 10.9890% ( 16) 00:07:17.567 8267.618 - 8318.031: 11.1178% ( 15) 00:07:17.567 8318.031 - 8368.443: 11.3668% ( 29) 00:07:17.567 8368.443 - 8418.855: 11.6587% ( 34) 00:07:17.567 8418.855 - 8469.268: 12.0364% ( 44) 00:07:17.567 8469.268 - 8519.680: 12.6803% ( 75) 00:07:17.567 8519.680 - 8570.092: 13.4444% ( 89) 00:07:17.567 8570.092 - 8620.505: 14.1312% ( 80) 00:07:17.567 8620.505 - 8670.917: 14.8008% ( 78) 00:07:17.567 8670.917 - 8721.329: 15.5907% ( 92) 00:07:17.567 8721.329 - 8771.742: 16.1144% ( 61) 00:07:17.567 8771.742 - 8822.154: 16.7153% ( 70) 00:07:17.567 8822.154 - 8872.566: 17.4107% ( 81) 00:07:17.567 8872.566 - 8922.978: 18.2005% ( 92) 00:07:17.567 8922.978 - 8973.391: 18.7929% ( 69) 00:07:17.567 8973.391 - 9023.803: 19.6514% ( 100) 00:07:17.567 9023.803 - 9074.215: 20.4670% ( 95) 00:07:17.567 9074.215 - 9124.628: 21.3685% ( 105) 00:07:17.567 9124.628 - 9175.040: 22.2871% ( 107) 00:07:17.567 9175.040 - 9225.452: 23.0512% ( 89) 00:07:17.567 9225.452 - 9275.865: 24.1071% ( 123) 00:07:17.567 9275.865 - 9326.277: 24.9141% ( 94) 00:07:17.567 9326.277 - 9376.689: 25.7555% ( 98) 00:07:17.567 9376.689 - 9427.102: 26.8029% ( 122) 00:07:17.567 9427.102 - 9477.514: 27.6528% ( 99) 00:07:17.567 9477.514 - 9527.926: 28.3482% ( 81) 00:07:17.567 9527.926 - 9578.338: 29.1295% ( 91) 00:07:17.567 9578.338 - 9628.751: 29.6617% ( 62) 00:07:17.567 9628.751 - 9679.163: 30.2198% ( 65) 00:07:17.567 9679.163 - 9729.575: 30.7864% ( 66) 00:07:17.567 9729.575 - 9779.988: 31.5161% ( 85) 00:07:17.567 9779.988 - 9830.400: 32.1343% ( 72) 00:07:17.567 9830.400 - 9880.812: 33.1130% ( 114) 00:07:17.567 9880.812 - 9931.225: 33.7998% ( 80) 00:07:17.567 9931.225 - 9981.637: 34.5124% ( 83) 00:07:17.567 9981.637 - 10032.049: 35.5598% ( 122) 00:07:17.567 10032.049 - 10082.462: 36.5385% ( 114) 00:07:17.567 10082.462 - 10132.874: 37.4141% ( 102) 00:07:17.567 10132.874 - 10183.286: 38.3929% ( 114) 00:07:17.567 10183.286 - 10233.698: 39.5862% ( 139) 00:07:17.567 10233.698 - 10284.111: 40.8396% ( 146) 00:07:17.567 10284.111 - 10334.523: 42.0330% ( 139) 00:07:17.567 10334.523 - 10384.935: 43.2005% ( 136) 00:07:17.567 10384.935 - 10435.348: 44.2909% ( 127) 00:07:17.567 10435.348 - 10485.760: 45.6302% ( 156) 00:07:17.567 10485.760 - 10536.172: 46.9609% ( 155) 00:07:17.567 10536.172 - 10586.585: 48.1198% ( 135) 00:07:17.567 10586.585 - 10636.997: 49.0127% ( 104) 00:07:17.567 10636.997 - 10687.409: 49.8111% ( 93) 00:07:17.567 10687.409 - 10737.822: 50.6095% ( 93) 00:07:17.567 10737.822 - 10788.234: 51.4852% ( 102) 00:07:17.567 10788.234 - 10838.646: 52.4983% ( 118) 00:07:17.567 10838.646 - 10889.058: 53.3568% ( 100) 00:07:17.567 10889.058 - 10939.471: 54.4815% ( 131) 00:07:17.567 10939.471 - 10989.883: 55.4688% ( 115) 00:07:17.567 10989.883 - 11040.295: 
56.2586% ( 92) 00:07:17.567 11040.295 - 11090.708: 57.0656% ( 94) 00:07:17.567 11090.708 - 11141.120: 57.8383% ( 90) 00:07:17.567 11141.120 - 11191.532: 58.5680% ( 85) 00:07:17.567 11191.532 - 11241.945: 59.2634% ( 81) 00:07:17.567 11241.945 - 11292.357: 60.0962% ( 97) 00:07:17.567 11292.357 - 11342.769: 60.9633% ( 101) 00:07:17.567 11342.769 - 11393.182: 61.9162% ( 111) 00:07:17.567 11393.182 - 11443.594: 62.8949% ( 114) 00:07:17.567 11443.594 - 11494.006: 63.7792% ( 103) 00:07:17.567 11494.006 - 11544.418: 64.6034% ( 96) 00:07:17.567 11544.418 - 11594.831: 65.3417% ( 86) 00:07:17.567 11594.831 - 11645.243: 66.4062% ( 124) 00:07:17.567 11645.243 - 11695.655: 67.4107% ( 117) 00:07:17.567 11695.655 - 11746.068: 68.5096% ( 128) 00:07:17.567 11746.068 - 11796.480: 69.6257% ( 130) 00:07:17.567 11796.480 - 11846.892: 70.9049% ( 149) 00:07:17.567 11846.892 - 11897.305: 71.8492% ( 110) 00:07:17.567 11897.305 - 11947.717: 72.8280% ( 114) 00:07:17.567 11947.717 - 11998.129: 73.6264% ( 93) 00:07:17.567 11998.129 - 12048.542: 74.2102% ( 68) 00:07:17.567 12048.542 - 12098.954: 74.8970% ( 80) 00:07:17.567 12098.954 - 12149.366: 75.5580% ( 77) 00:07:17.567 12149.366 - 12199.778: 76.3565% ( 93) 00:07:17.567 12199.778 - 12250.191: 77.0690% ( 83) 00:07:17.567 12250.191 - 12300.603: 77.7644% ( 81) 00:07:17.567 12300.603 - 12351.015: 78.5457% ( 91) 00:07:17.567 12351.015 - 12401.428: 79.4128% ( 101) 00:07:17.567 12401.428 - 12451.840: 80.2284% ( 95) 00:07:17.567 12451.840 - 12502.252: 80.8551% ( 73) 00:07:17.567 12502.252 - 12552.665: 81.6020% ( 87) 00:07:17.567 12552.665 - 12603.077: 82.2888% ( 80) 00:07:17.567 12603.077 - 12653.489: 82.9584% ( 78) 00:07:17.567 12653.489 - 12703.902: 83.5680% ( 71) 00:07:17.567 12703.902 - 12754.314: 84.1260% ( 65) 00:07:17.567 12754.314 - 12804.726: 84.6497% ( 61) 00:07:17.567 12804.726 - 12855.138: 85.1391% ( 57) 00:07:17.567 12855.138 - 12905.551: 85.5426% ( 47) 00:07:17.567 12905.551 - 13006.375: 86.1865% ( 75) 00:07:17.567 13006.375 - 13107.200: 86.6071% ( 49) 00:07:17.567 13107.200 - 13208.025: 86.9420% ( 39) 00:07:17.567 13208.025 - 13308.849: 87.2081% ( 31) 00:07:17.567 13308.849 - 13409.674: 87.5343% ( 38) 00:07:17.567 13409.674 - 13510.498: 88.1611% ( 73) 00:07:17.567 13510.498 - 13611.323: 88.6247% ( 54) 00:07:17.567 13611.323 - 13712.148: 88.9852% ( 42) 00:07:17.567 13712.148 - 13812.972: 89.4746% ( 57) 00:07:17.567 13812.972 - 13913.797: 89.9983% ( 61) 00:07:17.567 13913.797 - 14014.622: 90.4447% ( 52) 00:07:17.567 14014.622 - 14115.446: 90.9083% ( 54) 00:07:17.567 14115.446 - 14216.271: 91.3633% ( 53) 00:07:17.567 14216.271 - 14317.095: 91.6552% ( 34) 00:07:17.567 14317.095 - 14417.920: 91.8012% ( 17) 00:07:17.567 14417.920 - 14518.745: 91.9900% ( 22) 00:07:17.567 14518.745 - 14619.569: 92.1961% ( 24) 00:07:17.567 14619.569 - 14720.394: 92.3506% ( 18) 00:07:17.567 14720.394 - 14821.218: 92.5996% ( 29) 00:07:17.567 14821.218 - 14922.043: 92.8829% ( 33) 00:07:17.567 14922.043 - 15022.868: 93.2349% ( 41) 00:07:17.567 15022.868 - 15123.692: 93.5697% ( 39) 00:07:17.567 15123.692 - 15224.517: 93.9131% ( 40) 00:07:17.567 15224.517 - 15325.342: 94.2565% ( 40) 00:07:17.567 15325.342 - 15426.166: 94.5484% ( 34) 00:07:17.567 15426.166 - 15526.991: 94.9004% ( 41) 00:07:17.567 15526.991 - 15627.815: 95.2095% ( 36) 00:07:17.567 15627.815 - 15728.640: 95.4842% ( 32) 00:07:17.567 15728.640 - 15829.465: 95.8362% ( 41) 00:07:17.567 15829.465 - 15930.289: 96.1968% ( 42) 00:07:17.567 15930.289 - 16031.114: 96.6174% ( 49) 00:07:17.567 16031.114 - 16131.938: 96.7977% ( 21) 
00:07:17.567 16131.938 - 16232.763: 96.9780% ( 21) 00:07:17.567 16232.763 - 16333.588: 97.1154% ( 16) 00:07:17.567 16333.588 - 16434.412: 97.2270% ( 13) 00:07:17.567 16434.412 - 16535.237: 97.3128% ( 10) 00:07:17.567 16535.237 - 16636.062: 97.4073% ( 11) 00:07:17.567 16636.062 - 16736.886: 97.5361% ( 15) 00:07:17.568 16736.886 - 16837.711: 97.6906% ( 18) 00:07:17.568 16837.711 - 16938.535: 97.8623% ( 20) 00:07:17.568 16938.535 - 17039.360: 98.0168% ( 18) 00:07:17.568 17039.360 - 17140.185: 98.1542% ( 16) 00:07:17.568 17140.185 - 17241.009: 98.3431% ( 22) 00:07:17.568 17241.009 - 17341.834: 98.4633% ( 14) 00:07:17.568 17341.834 - 17442.658: 98.5749% ( 13) 00:07:17.568 17442.658 - 17543.483: 98.6865% ( 13) 00:07:17.568 17543.483 - 17644.308: 98.7981% ( 13) 00:07:17.568 17644.308 - 17745.132: 98.8753% ( 9) 00:07:17.568 17745.132 - 17845.957: 98.9011% ( 3) 00:07:17.568 23290.486 - 23391.311: 98.9183% ( 2) 00:07:17.568 23391.311 - 23492.135: 98.9354% ( 2) 00:07:17.568 23492.135 - 23592.960: 98.9784% ( 5) 00:07:17.568 23592.960 - 23693.785: 99.0213% ( 5) 00:07:17.568 23693.785 - 23794.609: 99.0385% ( 2) 00:07:17.568 23794.609 - 23895.434: 99.0728% ( 4) 00:07:17.568 23895.434 - 23996.258: 99.1157% ( 5) 00:07:17.568 23996.258 - 24097.083: 99.1501% ( 4) 00:07:17.568 24097.083 - 24197.908: 99.1930% ( 5) 00:07:17.568 24197.908 - 24298.732: 99.2273% ( 4) 00:07:17.568 24298.732 - 24399.557: 99.2703% ( 5) 00:07:17.568 24399.557 - 24500.382: 99.3046% ( 4) 00:07:17.568 24500.382 - 24601.206: 99.3389% ( 4) 00:07:17.568 24601.206 - 24702.031: 99.3819% ( 5) 00:07:17.568 24702.031 - 24802.855: 99.4162% ( 4) 00:07:17.568 24802.855 - 24903.680: 99.4505% ( 4) 00:07:17.568 31658.929 - 31860.578: 99.5021% ( 6) 00:07:17.568 31860.578 - 32062.228: 99.5707% ( 8) 00:07:17.568 32062.228 - 32263.877: 99.6480% ( 9) 00:07:17.568 32263.877 - 32465.526: 99.7081% ( 7) 00:07:17.568 32465.526 - 32667.175: 99.7854% ( 9) 00:07:17.568 32667.175 - 32868.825: 99.8626% ( 9) 00:07:17.568 32868.825 - 33070.474: 99.9399% ( 9) 00:07:17.568 33070.474 - 33272.123: 100.0000% ( 7) 00:07:17.568 00:07:17.568 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:17.568 ============================================================================== 00:07:17.568 Range in us Cumulative IO count 00:07:17.568 3806.129 - 3831.335: 0.0342% ( 4) 00:07:17.568 3831.335 - 3856.542: 0.0768% ( 5) 00:07:17.568 3856.542 - 3881.748: 0.1537% ( 9) 00:07:17.568 3881.748 - 3906.954: 0.2391% ( 10) 00:07:17.568 3906.954 - 3932.160: 0.3245% ( 10) 00:07:17.568 3932.160 - 3957.366: 0.3586% ( 4) 00:07:17.568 3957.366 - 3982.572: 0.3842% ( 3) 00:07:17.568 3982.572 - 4007.778: 0.4013% ( 2) 00:07:17.568 4007.778 - 4032.985: 0.4184% ( 2) 00:07:17.568 4032.985 - 4058.191: 0.4355% ( 2) 00:07:17.568 4058.191 - 4083.397: 0.4525% ( 2) 00:07:17.568 4083.397 - 4108.603: 0.4696% ( 2) 00:07:17.568 4108.603 - 4133.809: 0.4867% ( 2) 00:07:17.568 4133.809 - 4159.015: 0.5038% ( 2) 00:07:17.568 4159.015 - 4184.222: 0.5294% ( 3) 00:07:17.568 4184.222 - 4209.428: 0.5464% ( 2) 00:07:17.568 6200.714 - 6225.920: 0.5550% ( 1) 00:07:17.568 6251.126 - 6276.332: 0.5635% ( 1) 00:07:17.568 6276.332 - 6301.538: 0.5891% ( 3) 00:07:17.568 6301.538 - 6326.745: 0.6233% ( 4) 00:07:17.568 6326.745 - 6351.951: 0.6745% ( 6) 00:07:17.568 6351.951 - 6377.157: 0.7172% ( 5) 00:07:17.568 6377.157 - 6402.363: 0.8624% ( 17) 00:07:17.568 6402.363 - 6427.569: 1.0161% ( 18) 00:07:17.568 6427.569 - 6452.775: 1.0673% ( 6) 00:07:17.568 6452.775 - 6503.188: 1.2380% ( 20) 00:07:17.568 6503.188 - 6553.600: 
1.5710% ( 39) 00:07:17.568 6553.600 - 6604.012: 1.9638% ( 46) 00:07:17.568 6604.012 - 6654.425: 2.3480% ( 45) 00:07:17.568 6654.425 - 6704.837: 2.4761% ( 15) 00:07:17.568 6704.837 - 6755.249: 2.5956% ( 14) 00:07:17.568 6755.249 - 6805.662: 2.6639% ( 8) 00:07:17.568 6805.662 - 6856.074: 2.7408% ( 9) 00:07:17.568 6856.074 - 6906.486: 2.8859% ( 17) 00:07:17.568 6906.486 - 6956.898: 3.1506% ( 31) 00:07:17.568 6956.898 - 7007.311: 3.4495% ( 35) 00:07:17.568 7007.311 - 7057.723: 4.3887% ( 110) 00:07:17.568 7057.723 - 7108.135: 5.5669% ( 138) 00:07:17.568 7108.135 - 7158.548: 5.9085% ( 40) 00:07:17.568 7158.548 - 7208.960: 6.7708% ( 101) 00:07:17.568 7208.960 - 7259.372: 7.3856% ( 72) 00:07:17.568 7259.372 - 7309.785: 7.6588% ( 32) 00:07:17.568 7309.785 - 7360.197: 7.8723% ( 25) 00:07:17.568 7360.197 - 7410.609: 8.0345% ( 19) 00:07:17.568 7410.609 - 7461.022: 8.1796% ( 17) 00:07:17.568 7461.022 - 7511.434: 8.2565% ( 9) 00:07:17.568 7511.434 - 7561.846: 8.2650% ( 1) 00:07:17.568 7561.846 - 7612.258: 8.2992% ( 4) 00:07:17.568 7612.258 - 7662.671: 8.4443% ( 17) 00:07:17.568 7662.671 - 7713.083: 8.6236% ( 21) 00:07:17.568 7713.083 - 7763.495: 8.7346% ( 13) 00:07:17.568 7763.495 - 7813.908: 8.8029% ( 8) 00:07:17.568 7813.908 - 7864.320: 8.9139% ( 13) 00:07:17.568 7864.320 - 7914.732: 9.0249% ( 13) 00:07:17.568 7914.732 - 7965.145: 9.1615% ( 16) 00:07:17.568 7965.145 - 8015.557: 9.5031% ( 40) 00:07:17.568 8015.557 - 8065.969: 9.6226% ( 14) 00:07:17.568 8065.969 - 8116.382: 9.7080% ( 10) 00:07:17.568 8116.382 - 8166.794: 9.8446% ( 16) 00:07:17.568 8166.794 - 8217.206: 10.0324% ( 22) 00:07:17.568 8217.206 - 8267.618: 10.3313% ( 35) 00:07:17.568 8267.618 - 8318.031: 10.8436% ( 60) 00:07:17.568 8318.031 - 8368.443: 11.2022% ( 42) 00:07:17.568 8368.443 - 8418.855: 11.7999% ( 70) 00:07:17.568 8418.855 - 8469.268: 12.3890% ( 69) 00:07:17.568 8469.268 - 8519.680: 12.9525% ( 66) 00:07:17.568 8519.680 - 8570.092: 13.7722% ( 96) 00:07:17.568 8570.092 - 8620.505: 14.6773% ( 106) 00:07:17.568 8620.505 - 8670.917: 15.3176% ( 75) 00:07:17.568 8670.917 - 8721.329: 15.8385% ( 61) 00:07:17.568 8721.329 - 8771.742: 16.4276% ( 69) 00:07:17.568 8771.742 - 8822.154: 17.3668% ( 110) 00:07:17.568 8822.154 - 8872.566: 17.9389% ( 67) 00:07:17.568 8872.566 - 8922.978: 18.7756% ( 98) 00:07:17.568 8922.978 - 8973.391: 19.6636% ( 104) 00:07:17.568 8973.391 - 9023.803: 20.3893% ( 85) 00:07:17.568 9023.803 - 9074.215: 21.2432% ( 100) 00:07:17.568 9074.215 - 9124.628: 22.0458% ( 94) 00:07:17.568 9124.628 - 9175.040: 22.7630% ( 84) 00:07:17.568 9175.040 - 9225.452: 23.5656% ( 94) 00:07:17.568 9225.452 - 9275.865: 24.1718% ( 71) 00:07:17.568 9275.865 - 9326.277: 24.7268% ( 65) 00:07:17.568 9326.277 - 9376.689: 25.3928% ( 78) 00:07:17.568 9376.689 - 9427.102: 26.0075% ( 72) 00:07:17.568 9427.102 - 9477.514: 26.4942% ( 57) 00:07:17.568 9477.514 - 9527.926: 26.9126% ( 49) 00:07:17.568 9527.926 - 9578.338: 27.3139% ( 47) 00:07:17.568 9578.338 - 9628.751: 27.7066% ( 46) 00:07:17.568 9628.751 - 9679.163: 28.6544% ( 111) 00:07:17.568 9679.163 - 9729.575: 29.3033% ( 76) 00:07:17.568 9729.575 - 9779.988: 30.1486% ( 99) 00:07:17.568 9779.988 - 9830.400: 31.0109% ( 101) 00:07:17.568 9830.400 - 9880.812: 31.8477% ( 98) 00:07:17.568 9880.812 - 9931.225: 32.8552% ( 118) 00:07:17.568 9931.225 - 9981.637: 34.0932% ( 145) 00:07:17.568 9981.637 - 10032.049: 35.1520% ( 124) 00:07:17.568 10032.049 - 10082.462: 36.3217% ( 137) 00:07:17.568 10082.462 - 10132.874: 37.4658% ( 134) 00:07:17.568 10132.874 - 10183.286: 38.5844% ( 131) 00:07:17.568 10183.286 - 
10233.698: 39.7712% ( 139) 00:07:17.568 10233.698 - 10284.111: 41.2739% ( 176) 00:07:17.568 10284.111 - 10334.523: 42.4949% ( 143) 00:07:17.568 10334.523 - 10384.935: 43.7842% ( 151) 00:07:17.568 10384.935 - 10435.348: 45.1417% ( 159) 00:07:17.568 10435.348 - 10485.760: 46.1834% ( 122) 00:07:17.568 10485.760 - 10536.172: 47.3702% ( 139) 00:07:17.568 10536.172 - 10586.585: 48.3265% ( 112) 00:07:17.568 10586.585 - 10636.997: 49.2230% ( 105) 00:07:17.568 10636.997 - 10687.409: 50.1793% ( 112) 00:07:17.568 10687.409 - 10737.822: 51.1270% ( 111) 00:07:17.568 10737.822 - 10788.234: 51.8699% ( 87) 00:07:17.568 10788.234 - 10838.646: 52.5786% ( 83) 00:07:17.568 10838.646 - 10889.058: 53.1592% ( 68) 00:07:17.568 10889.058 - 10939.471: 53.6544% ( 58) 00:07:17.568 10939.471 - 10989.883: 54.3204% ( 78) 00:07:17.568 10989.883 - 11040.295: 55.2510% ( 109) 00:07:17.568 11040.295 - 11090.708: 56.2158% ( 113) 00:07:17.568 11090.708 - 11141.120: 57.2319% ( 119) 00:07:17.568 11141.120 - 11191.532: 58.6066% ( 161) 00:07:17.568 11191.532 - 11241.945: 59.7848% ( 138) 00:07:17.568 11241.945 - 11292.357: 60.8777% ( 128) 00:07:17.568 11292.357 - 11342.769: 62.0048% ( 132) 00:07:17.568 11342.769 - 11393.182: 62.9098% ( 106) 00:07:17.568 11393.182 - 11443.594: 63.7637% ( 100) 00:07:17.568 11443.594 - 11494.006: 64.8309% ( 125) 00:07:17.568 11494.006 - 11544.418: 65.6250% ( 93) 00:07:17.568 11544.418 - 11594.831: 66.4617% ( 98) 00:07:17.568 11594.831 - 11645.243: 67.1960% ( 86) 00:07:17.568 11645.243 - 11695.655: 68.0157% ( 96) 00:07:17.568 11695.655 - 11746.068: 68.8268% ( 95) 00:07:17.568 11746.068 - 11796.480: 69.5782% ( 88) 00:07:17.568 11796.480 - 11846.892: 70.4918% ( 107) 00:07:17.568 11846.892 - 11897.305: 71.4481% ( 112) 00:07:17.568 11897.305 - 11947.717: 72.6691% ( 143) 00:07:17.568 11947.717 - 11998.129: 73.4802% ( 95) 00:07:17.568 11998.129 - 12048.542: 74.3084% ( 97) 00:07:17.568 12048.542 - 12098.954: 75.3415% ( 121) 00:07:17.568 12098.954 - 12149.366: 76.0587% ( 84) 00:07:17.568 12149.366 - 12199.778: 76.8357% ( 91) 00:07:17.568 12199.778 - 12250.191: 77.8091% ( 114) 00:07:17.568 12250.191 - 12300.603: 78.6885% ( 103) 00:07:17.568 12300.603 - 12351.015: 79.3630% ( 79) 00:07:17.568 12351.015 - 12401.428: 80.0205% ( 77) 00:07:17.568 12401.428 - 12451.840: 80.6609% ( 75) 00:07:17.568 12451.840 - 12502.252: 81.3183% ( 77) 00:07:17.568 12502.252 - 12552.665: 81.7794% ( 54) 00:07:17.568 12552.665 - 12603.077: 82.2490% ( 55) 00:07:17.568 12603.077 - 12653.489: 82.5734% ( 38) 00:07:17.568 12653.489 - 12703.902: 82.9491% ( 44) 00:07:17.568 12703.902 - 12754.314: 83.4443% ( 58) 00:07:17.568 12754.314 - 12804.726: 83.8115% ( 43) 00:07:17.569 12804.726 - 12855.138: 84.1701% ( 42) 00:07:17.569 12855.138 - 12905.551: 84.5287% ( 42) 00:07:17.569 12905.551 - 13006.375: 85.1947% ( 78) 00:07:17.569 13006.375 - 13107.200: 85.6216% ( 50) 00:07:17.569 13107.200 - 13208.025: 86.0229% ( 47) 00:07:17.569 13208.025 - 13308.849: 86.3986% ( 44) 00:07:17.569 13308.849 - 13409.674: 87.0902% ( 81) 00:07:17.569 13409.674 - 13510.498: 87.6281% ( 63) 00:07:17.569 13510.498 - 13611.323: 88.0038% ( 44) 00:07:17.569 13611.323 - 13712.148: 88.3111% ( 36) 00:07:17.569 13712.148 - 13812.972: 88.7039% ( 46) 00:07:17.569 13812.972 - 13913.797: 88.9088% ( 24) 00:07:17.569 13913.797 - 14014.622: 89.0796% ( 20) 00:07:17.569 14014.622 - 14115.446: 89.3613% ( 33) 00:07:17.569 14115.446 - 14216.271: 89.9419% ( 68) 00:07:17.569 14216.271 - 14317.095: 90.6250% ( 80) 00:07:17.569 14317.095 - 14417.920: 91.0519% ( 50) 00:07:17.569 14417.920 - 
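The histograms above report a cumulative IO percentage per microsecond bucket, so an approximate percentile is simply the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch of recovering p50/p99 from a saved copy of this log, assuming bucket rows of the form "5797.415 - 5822.622: 0.0258% ( 3)"; the regex, helper names, and file name are illustrative assumptions, not part of SPDK or this test suite:

import re

# Sketch only: parses nvme_perf-style cumulative latency histogram rows like
#   "5797.415 - 5822.622: 0.0258% ( 3)"
# (lower bound us, upper bound us, cumulative percent, bucket IO count).
BUCKET_RE = re.compile(
    r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)"
)

def parse_buckets(text):
    # Returns [(upper_bound_us, cumulative_percent), ...] in log order.
    return [(float(hi), float(cum))
            for _lo, hi, cum, _count in BUCKET_RE.findall(text)]

def percentile(buckets, target):
    # Upper bound of the first bucket whose cumulative percentage reaches
    # the target; None if the histogram never gets there.
    for hi_us, cum in buckets:
        if cum >= target:
            return hi_us
    return None

# Hypothetical usage against a saved log section for one controller:
# buckets = parse_buckets(open("nvme_perf_0000_00_11_0.log").read())
# print(percentile(buckets, 50.0), percentile(buckets, 99.0))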
00:07:17.569 ************************************
00:07:17.569 END TEST nvme_perf
00:07:17.569 ************************************
00:07:17.569 02:54:33 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:17.569
00:07:17.569 real 0m2.421s
00:07:17.569 user 0m2.166s
00:07:17.569 sys 0m0.154s
00:07:17.569 02:54:33 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.569 02:54:33 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:17.828 02:54:33 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:17.828 02:54:33 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:17.828 02:54:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.828 02:54:33 nvme -- common/autotest_common.sh@10 -- # set +x
************************************ 00:07:17.828 START TEST nvme_hello_world 00:07:17.828 ************************************ 00:07:17.828 02:54:33 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:17.828 Initializing NVMe Controllers 00:07:17.828 Attached to 0000:00:10.0 00:07:17.828 Namespace ID: 1 size: 6GB 00:07:17.828 Attached to 0000:00:11.0 00:07:17.829 Namespace ID: 1 size: 5GB 00:07:17.829 Attached to 0000:00:13.0 00:07:17.829 Namespace ID: 1 size: 1GB 00:07:17.829 Attached to 0000:00:12.0 00:07:17.829 Namespace ID: 1 size: 4GB 00:07:17.829 Namespace ID: 2 size: 4GB 00:07:17.829 Namespace ID: 3 size: 4GB 00:07:17.829 Initialization complete. 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 INFO: using host memory buffer for IO 00:07:17.829 Hello world! 00:07:17.829 00:07:17.829 real 0m0.226s 00:07:17.829 user 0m0.074s 00:07:17.829 sys 0m0.096s 00:07:17.829 02:54:33 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.829 ************************************ 00:07:17.829 END TEST nvme_hello_world 00:07:17.829 ************************************ 00:07:17.829 02:54:33 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:18.089 02:54:33 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:18.089 02:54:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.089 02:54:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.089 02:54:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.089 ************************************ 00:07:18.089 START TEST nvme_sgl 00:07:18.089 ************************************ 00:07:18.089 02:54:33 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:18.089 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:18.089 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:18.089 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:18.089 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:18.089 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:18.089 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:18.089 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:18.089 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:18.089 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:18.349 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:18.349 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:18.349 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_5 Invalid IO length 
parameter 00:07:18.349 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:18.349 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:18.349 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:18.349 NVMe Readv/Writev Request test 00:07:18.349 Attached to 0000:00:10.0 00:07:18.349 Attached to 0000:00:11.0 00:07:18.349 Attached to 0000:00:13.0 00:07:18.349 Attached to 0000:00:12.0 00:07:18.349 0000:00:10.0: build_io_request_2 test passed 00:07:18.349 0000:00:10.0: build_io_request_4 test passed 00:07:18.349 0000:00:10.0: build_io_request_5 test passed 00:07:18.349 0000:00:10.0: build_io_request_6 test passed 00:07:18.349 0000:00:10.0: build_io_request_7 test passed 00:07:18.349 0000:00:10.0: build_io_request_10 test passed 00:07:18.349 0000:00:11.0: build_io_request_2 test passed 00:07:18.349 0000:00:11.0: build_io_request_4 test passed 00:07:18.349 0000:00:11.0: build_io_request_5 test passed 00:07:18.349 0000:00:11.0: build_io_request_6 test passed 00:07:18.349 0000:00:11.0: build_io_request_7 test passed 00:07:18.349 0000:00:11.0: build_io_request_10 test passed 00:07:18.349 Cleaning up... 00:07:18.349 00:07:18.349 real 0m0.260s 00:07:18.349 user 0m0.133s 00:07:18.349 sys 0m0.083s 00:07:18.349 02:54:34 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.349 ************************************ 00:07:18.350 END TEST nvme_sgl 00:07:18.350 ************************************ 00:07:18.350 02:54:34 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:18.350 02:54:34 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:18.350 02:54:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.350 02:54:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.350 02:54:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.350 ************************************ 00:07:18.350 START TEST nvme_e2edp 00:07:18.350 ************************************ 00:07:18.350 02:54:34 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:18.611 NVMe Write/Read with End-to-End data protection test 00:07:18.611 Attached to 0000:00:10.0 00:07:18.611 Attached to 0000:00:11.0 00:07:18.611 Attached to 0000:00:13.0 00:07:18.611 Attached to 0000:00:12.0 00:07:18.611 Cleaning up... 
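Two readings worth flagging here: the build_io_request_* "Invalid IO length parameter" lines above are the SGL test deliberately submitting malformed request shapes (the cases it expects to work are the ones reported as "test passed", and the test still exits cleanly), and the nvme_e2edp run that follows them attaches all four emulated [1b36:0010] controllers and goes straight to cleanup before its timing summary below. A sketch of invoking the data-protection binary standalone, assuming the same repo path the harness traced:

  # NVMe write/read with end-to-end data protection; takes no arguments in the
  # traced invocation and prints per-controller attach lines before any PI checks.
  /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp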
00:07:18.611 00:07:18.611 real 0m0.184s 00:07:18.611 user 0m0.068s 00:07:18.611 sys 0m0.068s 00:07:18.611 ************************************ 00:07:18.611 END TEST nvme_e2edp 00:07:18.611 ************************************ 00:07:18.611 02:54:34 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.611 02:54:34 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:18.611 02:54:34 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:18.611 02:54:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.611 02:54:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.611 02:54:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.611 ************************************ 00:07:18.611 START TEST nvme_reserve 00:07:18.611 ************************************ 00:07:18.611 02:54:34 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:18.611 ===================================================== 00:07:18.611 NVMe Controller at PCI bus 0, device 16, function 0 00:07:18.611 ===================================================== 00:07:18.611 Reservations: Not Supported 00:07:18.611 ===================================================== 00:07:18.611 NVMe Controller at PCI bus 0, device 17, function 0 00:07:18.611 ===================================================== 00:07:18.611 Reservations: Not Supported 00:07:18.611 ===================================================== 00:07:18.611 NVMe Controller at PCI bus 0, device 19, function 0 00:07:18.611 ===================================================== 00:07:18.611 Reservations: Not Supported 00:07:18.611 ===================================================== 00:07:18.611 NVMe Controller at PCI bus 0, device 18, function 0 00:07:18.611 ===================================================== 00:07:18.611 Reservations: Not Supported 00:07:18.611 Reservation test passed 00:07:18.872 ************************************ 00:07:18.872 END TEST nvme_reserve 00:07:18.872 ************************************ 00:07:18.872 00:07:18.872 real 0m0.178s 00:07:18.872 user 0m0.062s 00:07:18.872 sys 0m0.073s 00:07:18.872 02:54:34 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.872 02:54:34 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 02:54:34 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:18.872 02:54:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.872 02:54:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.872 02:54:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 ************************************ 00:07:18.872 START TEST nvme_err_injection 00:07:18.872 ************************************ 00:07:18.872 02:54:34 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:19.133 NVMe Error Injection test 00:07:19.133 Attached to 0000:00:10.0 00:07:19.133 Attached to 0000:00:11.0 00:07:19.133 Attached to 0000:00:13.0 00:07:19.133 Attached to 0000:00:12.0 00:07:19.133 0000:00:10.0: get features failed as expected 00:07:19.133 0000:00:11.0: get features failed as expected 00:07:19.133 0000:00:13.0: get features failed as expected 00:07:19.133 0000:00:12.0: get features failed as expected 00:07:19.133 
0000:00:11.0: get features successfully as expected 00:07:19.133 0000:00:13.0: get features successfully as expected 00:07:19.133 0000:00:12.0: get features successfully as expected 00:07:19.133 0000:00:10.0: get features successfully as expected 00:07:19.133 0000:00:10.0: read failed as expected 00:07:19.133 0000:00:11.0: read failed as expected 00:07:19.133 0000:00:13.0: read failed as expected 00:07:19.133 0000:00:12.0: read failed as expected 00:07:19.133 0000:00:12.0: read successfully as expected 00:07:19.133 0000:00:10.0: read successfully as expected 00:07:19.133 0000:00:11.0: read successfully as expected 00:07:19.133 0000:00:13.0: read successfully as expected 00:07:19.133 Cleaning up... 00:07:19.133 ************************************ 00:07:19.133 END TEST nvme_err_injection 00:07:19.133 ************************************ 00:07:19.133 00:07:19.133 real 0m0.209s 00:07:19.133 user 0m0.073s 00:07:19.133 sys 0m0.081s 00:07:19.133 02:54:34 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.133 02:54:34 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:19.133 02:54:34 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:19.133 02:54:34 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:19.133 02:54:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.133 02:54:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.133 ************************************ 00:07:19.133 START TEST nvme_overhead 00:07:19.133 ************************************ 00:07:19.133 02:54:34 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:20.527 Initializing NVMe Controllers 00:07:20.527 Attached to 0000:00:10.0 00:07:20.527 Attached to 0000:00:11.0 00:07:20.527 Attached to 0000:00:13.0 00:07:20.527 Attached to 0000:00:12.0 00:07:20.527 Initialization complete. Launching workers. 
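nvme_overhead, just launched above, measures host-side per-I/O CPU cost rather than device latency. The summary that follows reports submit/complete times in nanoseconds, while the two histograms bucket the same samples in microseconds with cumulative percentages and per-bucket counts in parentheses; note how the submit average of 9876.5 ns sits right where the submit histogram mass peaks, around 9-10 us. A standalone sketch, assuming the repo layout used throughout this log and reusing the harness' exact flags:

  # Per-I/O submit/complete overhead: 4 KiB I/O (-o 4096) for one second (-t 1);
  # -H and -i 0 are copied verbatim from the run_test invocation above.
  /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0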
00:07:20.527 submit (in ns) avg, min, max = 9876.5, 7995.4, 107569.2 00:07:20.527 complete (in ns) avg, min, max = 6496.4, 5673.1, 357201.5 00:07:20.527 00:07:20.527 Submit histogram 00:07:20.527 ================ 00:07:20.527 Range in us Cumulative Count 00:07:20.527 7.975 - 8.025: 0.0083% ( 1) 00:07:20.527 8.025 - 8.074: 0.0167% ( 1) 00:07:20.527 8.271 - 8.320: 0.0250% ( 1) 00:07:20.527 8.320 - 8.369: 0.0334% ( 1) 00:07:20.527 8.615 - 8.665: 0.0417% ( 1) 00:07:20.527 8.665 - 8.714: 0.1501% ( 13) 00:07:20.527 8.714 - 8.763: 0.4837% ( 40) 00:07:20.527 8.763 - 8.812: 1.6514% ( 140) 00:07:20.527 8.812 - 8.862: 4.5788% ( 351) 00:07:20.527 8.862 - 8.911: 10.5004% ( 710) 00:07:20.527 8.911 - 8.960: 20.1001% ( 1151) 00:07:20.527 8.960 - 9.009: 31.4512% ( 1361) 00:07:20.527 9.009 - 9.058: 42.9191% ( 1375) 00:07:20.527 9.058 - 9.108: 52.2519% ( 1119) 00:07:20.527 9.108 - 9.157: 58.0734% ( 698) 00:07:20.527 9.157 - 9.206: 61.9516% ( 465) 00:07:20.527 9.206 - 9.255: 64.5371% ( 310) 00:07:20.527 9.255 - 9.305: 66.3803% ( 221) 00:07:20.527 9.305 - 9.354: 68.0484% ( 200) 00:07:20.527 9.354 - 9.403: 69.4746% ( 171) 00:07:20.527 9.403 - 9.452: 70.5838% ( 133) 00:07:20.527 9.452 - 9.502: 71.4595% ( 105) 00:07:20.527 9.502 - 9.551: 72.2519% ( 95) 00:07:20.527 9.551 - 9.600: 72.8691% ( 74) 00:07:20.527 9.600 - 9.649: 73.3528% ( 58) 00:07:20.527 9.649 - 9.698: 73.7698% ( 50) 00:07:20.527 9.698 - 9.748: 74.2702% ( 60) 00:07:20.527 9.748 - 9.797: 74.6872% ( 50) 00:07:20.527 9.797 - 9.846: 75.2377% ( 66) 00:07:20.527 9.846 - 9.895: 75.8382% ( 72) 00:07:20.527 9.895 - 9.945: 76.3136% ( 57) 00:07:20.527 9.945 - 9.994: 76.9558% ( 77) 00:07:20.527 9.994 - 10.043: 77.5563% ( 72) 00:07:20.527 10.043 - 10.092: 78.1401% ( 70) 00:07:20.527 10.092 - 10.142: 78.7073% ( 68) 00:07:20.527 10.142 - 10.191: 78.9741% ( 32) 00:07:20.527 10.191 - 10.240: 79.2244% ( 30) 00:07:20.527 10.240 - 10.289: 79.3745% ( 18) 00:07:20.527 10.289 - 10.338: 79.5329% ( 19) 00:07:20.527 10.338 - 10.388: 79.6163% ( 10) 00:07:20.527 10.388 - 10.437: 79.6747% ( 7) 00:07:20.527 10.437 - 10.486: 79.7581% ( 10) 00:07:20.527 10.486 - 10.535: 79.7915% ( 4) 00:07:20.527 10.535 - 10.585: 79.8832% ( 11) 00:07:20.527 10.585 - 10.634: 80.0250% ( 17) 00:07:20.527 10.634 - 10.683: 80.1501% ( 15) 00:07:20.527 10.683 - 10.732: 80.2252% ( 9) 00:07:20.527 10.732 - 10.782: 80.2585% ( 4) 00:07:20.527 10.782 - 10.831: 80.3670% ( 13) 00:07:20.527 10.831 - 10.880: 80.5171% ( 18) 00:07:20.527 10.880 - 10.929: 80.7006% ( 22) 00:07:20.527 10.929 - 10.978: 80.9758% ( 33) 00:07:20.527 10.978 - 11.028: 81.2761% ( 36) 00:07:20.527 11.028 - 11.077: 81.7014% ( 51) 00:07:20.527 11.077 - 11.126: 82.1268% ( 51) 00:07:20.527 11.126 - 11.175: 82.8691% ( 89) 00:07:20.527 11.175 - 11.225: 83.5863% ( 86) 00:07:20.527 11.225 - 11.274: 84.3620% ( 93) 00:07:20.527 11.274 - 11.323: 85.2794% ( 110) 00:07:20.527 11.323 - 11.372: 86.0884% ( 97) 00:07:20.527 11.372 - 11.422: 87.1810% ( 131) 00:07:20.527 11.422 - 11.471: 88.1651% ( 118) 00:07:20.527 11.471 - 11.520: 88.9158% ( 90) 00:07:20.527 11.520 - 11.569: 89.5830% ( 80) 00:07:20.527 11.569 - 11.618: 90.2419% ( 79) 00:07:20.527 11.618 - 11.668: 90.9008% ( 79) 00:07:20.527 11.668 - 11.717: 91.3761% ( 57) 00:07:20.527 11.717 - 11.766: 91.7932% ( 50) 00:07:20.527 11.766 - 11.815: 92.2769% ( 58) 00:07:20.527 11.815 - 11.865: 92.6105% ( 40) 00:07:20.527 11.865 - 11.914: 92.9608% ( 42) 00:07:20.527 11.914 - 11.963: 93.2694% ( 37) 00:07:20.527 11.963 - 12.012: 93.5113% ( 29) 00:07:20.527 12.012 - 12.062: 93.7281% ( 26) 00:07:20.527 12.062 - 12.111: 
93.9366% ( 25) 00:07:20.527 12.111 - 12.160: 94.1952% ( 31) 00:07:20.527 12.160 - 12.209: 94.4370% ( 29) 00:07:20.527 12.209 - 12.258: 94.6289% ( 23) 00:07:20.527 12.258 - 12.308: 94.8040% ( 21) 00:07:20.527 12.308 - 12.357: 94.9124% ( 13) 00:07:20.527 12.357 - 12.406: 95.1376% ( 27) 00:07:20.527 12.406 - 12.455: 95.2877% ( 18) 00:07:20.527 12.455 - 12.505: 95.3378% ( 6) 00:07:20.527 12.505 - 12.554: 95.4045% ( 8) 00:07:20.527 12.554 - 12.603: 95.4629% ( 7) 00:07:20.527 12.603 - 12.702: 95.5796% ( 14) 00:07:20.527 12.702 - 12.800: 95.6464% ( 8) 00:07:20.527 12.800 - 12.898: 95.7631% ( 14) 00:07:20.527 12.898 - 12.997: 95.8132% ( 6) 00:07:20.527 12.997 - 13.095: 95.8215% ( 1) 00:07:20.527 13.095 - 13.194: 95.8716% ( 6) 00:07:20.527 13.194 - 13.292: 95.9716% ( 12) 00:07:20.527 13.292 - 13.391: 95.9883% ( 2) 00:07:20.527 13.391 - 13.489: 96.0467% ( 7) 00:07:20.527 13.489 - 13.588: 96.1384% ( 11) 00:07:20.527 13.588 - 13.686: 96.2219% ( 10) 00:07:20.527 13.686 - 13.785: 96.2969% ( 9) 00:07:20.527 13.785 - 13.883: 96.3970% ( 12) 00:07:20.527 13.883 - 13.982: 96.4637% ( 8) 00:07:20.527 13.982 - 14.080: 96.5388% ( 9) 00:07:20.527 14.080 - 14.178: 96.6138% ( 9) 00:07:20.527 14.178 - 14.277: 96.7139% ( 12) 00:07:20.527 14.277 - 14.375: 96.7973% ( 10) 00:07:20.527 14.375 - 14.474: 96.8807% ( 10) 00:07:20.527 14.474 - 14.572: 96.9725% ( 11) 00:07:20.527 14.572 - 14.671: 97.0309% ( 7) 00:07:20.527 14.671 - 14.769: 97.1143% ( 10) 00:07:20.527 14.769 - 14.868: 97.2143% ( 12) 00:07:20.527 14.868 - 14.966: 97.3228% ( 13) 00:07:20.527 14.966 - 15.065: 97.3895% ( 8) 00:07:20.527 15.065 - 15.163: 97.4812% ( 11) 00:07:20.527 15.163 - 15.262: 97.5646% ( 10) 00:07:20.527 15.262 - 15.360: 97.6647% ( 12) 00:07:20.527 15.360 - 15.458: 97.8232% ( 19) 00:07:20.527 15.458 - 15.557: 97.9399% ( 14) 00:07:20.527 15.557 - 15.655: 98.0234% ( 10) 00:07:20.527 15.655 - 15.754: 98.1318% ( 13) 00:07:20.527 15.754 - 15.852: 98.2569% ( 15) 00:07:20.527 15.852 - 15.951: 98.3319% ( 9) 00:07:20.527 15.951 - 16.049: 98.4487% ( 14) 00:07:20.527 16.049 - 16.148: 98.5154% ( 8) 00:07:20.527 16.148 - 16.246: 98.5988% ( 10) 00:07:20.527 16.246 - 16.345: 98.6405% ( 5) 00:07:20.527 16.345 - 16.443: 98.7323% ( 11) 00:07:20.527 16.443 - 16.542: 98.7656% ( 4) 00:07:20.527 16.542 - 16.640: 98.8157% ( 6) 00:07:20.527 16.640 - 16.738: 98.8574% ( 5) 00:07:20.527 16.738 - 16.837: 98.8991% ( 5) 00:07:20.527 16.837 - 16.935: 98.9408% ( 5) 00:07:20.527 16.935 - 17.034: 98.9741% ( 4) 00:07:20.527 17.034 - 17.132: 98.9992% ( 3) 00:07:20.527 17.132 - 17.231: 99.0409% ( 5) 00:07:20.527 17.231 - 17.329: 99.0742% ( 4) 00:07:20.527 17.329 - 17.428: 99.1243% ( 6) 00:07:20.527 17.428 - 17.526: 99.1576% ( 4) 00:07:20.527 17.526 - 17.625: 99.1910% ( 4) 00:07:20.527 17.625 - 17.723: 99.2160% ( 3) 00:07:20.527 17.723 - 17.822: 99.2410% ( 3) 00:07:20.527 17.822 - 17.920: 99.2661% ( 3) 00:07:20.527 17.920 - 18.018: 99.2827% ( 2) 00:07:20.527 18.018 - 18.117: 99.3078% ( 3) 00:07:20.527 18.117 - 18.215: 99.3328% ( 3) 00:07:20.527 18.215 - 18.314: 99.3495% ( 2) 00:07:20.527 18.314 - 18.412: 99.3745% ( 3) 00:07:20.527 18.412 - 18.511: 99.3828% ( 1) 00:07:20.527 18.511 - 18.609: 99.3912% ( 1) 00:07:20.527 18.609 - 18.708: 99.3995% ( 1) 00:07:20.527 18.708 - 18.806: 99.4245% ( 3) 00:07:20.527 18.806 - 18.905: 99.4412% ( 2) 00:07:20.527 19.003 - 19.102: 99.4495% ( 1) 00:07:20.527 19.495 - 19.594: 99.4579% ( 1) 00:07:20.527 19.594 - 19.692: 99.4746% ( 2) 00:07:20.527 19.791 - 19.889: 99.4829% ( 1) 00:07:20.527 19.988 - 20.086: 99.4912% ( 1) 00:07:20.527 20.382 - 20.480: 
99.4996% ( 1) 00:07:20.527 20.578 - 20.677: 99.5079% ( 1) 00:07:20.528 20.677 - 20.775: 99.5163% ( 1) 00:07:20.528 20.775 - 20.874: 99.5329% ( 2) 00:07:20.528 20.874 - 20.972: 99.5413% ( 1) 00:07:20.528 21.662 - 21.760: 99.5580% ( 2) 00:07:20.528 21.957 - 22.055: 99.5663% ( 1) 00:07:20.528 22.252 - 22.351: 99.5746% ( 1) 00:07:20.528 22.843 - 22.942: 99.5830% ( 1) 00:07:20.528 23.138 - 23.237: 99.5913% ( 1) 00:07:20.528 23.434 - 23.532: 99.5997% ( 1) 00:07:20.528 23.729 - 23.828: 99.6080% ( 1) 00:07:20.528 24.615 - 24.714: 99.6163% ( 1) 00:07:20.528 24.911 - 25.009: 99.6247% ( 1) 00:07:20.528 25.108 - 25.206: 99.6330% ( 1) 00:07:20.528 25.206 - 25.403: 99.6414% ( 1) 00:07:20.528 26.388 - 26.585: 99.6497% ( 1) 00:07:20.528 26.585 - 26.782: 99.6580% ( 1) 00:07:20.528 27.569 - 27.766: 99.6747% ( 2) 00:07:20.528 27.766 - 27.963: 99.7331% ( 7) 00:07:20.528 27.963 - 28.160: 99.7581% ( 3) 00:07:20.528 28.160 - 28.357: 99.7915% ( 4) 00:07:20.528 28.357 - 28.554: 99.8082% ( 2) 00:07:20.528 28.554 - 28.751: 99.8165% ( 1) 00:07:20.528 28.751 - 28.948: 99.8249% ( 1) 00:07:20.528 28.948 - 29.145: 99.8415% ( 2) 00:07:20.528 29.932 - 30.129: 99.8499% ( 1) 00:07:20.528 30.129 - 30.326: 99.8666% ( 2) 00:07:20.528 30.917 - 31.114: 99.8749% ( 1) 00:07:20.528 31.705 - 31.902: 99.8832% ( 1) 00:07:20.528 32.689 - 32.886: 99.8916% ( 1) 00:07:20.528 36.234 - 36.431: 99.8999% ( 1) 00:07:20.528 42.142 - 42.338: 99.9166% ( 2) 00:07:20.528 43.914 - 44.111: 99.9249% ( 1) 00:07:20.528 50.806 - 51.200: 99.9333% ( 1) 00:07:20.528 54.351 - 54.745: 99.9416% ( 1) 00:07:20.528 55.532 - 55.926: 99.9583% ( 2) 00:07:20.528 59.077 - 59.471: 99.9666% ( 1) 00:07:20.528 82.708 - 83.102: 99.9750% ( 1) 00:07:20.528 93.342 - 93.735: 99.9833% ( 1) 00:07:20.528 104.763 - 105.551: 99.9917% ( 1) 00:07:20.528 107.126 - 107.914: 100.0000% ( 1) 00:07:20.528 00:07:20.528 Complete histogram 00:07:20.528 ================== 00:07:20.528 Range in us Cumulative Count 00:07:20.528 5.662 - 5.686: 0.0167% ( 2) 00:07:20.528 5.686 - 5.711: 0.1418% ( 15) 00:07:20.528 5.711 - 5.735: 0.5588% ( 50) 00:07:20.528 5.735 - 5.760: 2.3186% ( 211) 00:07:20.528 5.760 - 5.785: 7.4145% ( 611) 00:07:20.528 5.785 - 5.809: 17.1309% ( 1165) 00:07:20.528 5.809 - 5.834: 28.8490% ( 1405) 00:07:20.528 5.834 - 5.858: 39.6330% ( 1293) 00:07:20.528 5.858 - 5.883: 49.7331% ( 1211) 00:07:20.528 5.883 - 5.908: 57.1810% ( 893) 00:07:20.528 5.908 - 5.932: 62.9024% ( 686) 00:07:20.528 5.932 - 5.957: 66.8724% ( 476) 00:07:20.528 5.957 - 5.982: 69.7081% ( 340) 00:07:20.528 5.982 - 6.006: 72.1268% ( 290) 00:07:20.528 6.006 - 6.031: 73.8866% ( 211) 00:07:20.528 6.031 - 6.055: 75.3044% ( 170) 00:07:20.528 6.055 - 6.080: 76.3720% ( 128) 00:07:20.528 6.080 - 6.105: 77.3228% ( 114) 00:07:20.528 6.105 - 6.129: 77.9566% ( 76) 00:07:20.528 6.129 - 6.154: 78.4070% ( 54) 00:07:20.528 6.154 - 6.178: 78.8073% ( 48) 00:07:20.528 6.178 - 6.203: 79.0575% ( 30) 00:07:20.528 6.203 - 6.228: 79.2911% ( 28) 00:07:20.528 6.228 - 6.252: 79.4996% ( 25) 00:07:20.528 6.252 - 6.277: 79.5496% ( 6) 00:07:20.528 6.277 - 6.302: 79.6163% ( 8) 00:07:20.528 6.302 - 6.351: 79.7248% ( 13) 00:07:20.528 6.351 - 6.400: 79.7415% ( 2) 00:07:20.528 6.400 - 6.449: 79.8332% ( 11) 00:07:20.528 6.449 - 6.498: 80.0000% ( 20) 00:07:20.528 6.498 - 6.548: 80.2085% ( 25) 00:07:20.528 6.548 - 6.597: 80.4003% ( 23) 00:07:20.528 6.597 - 6.646: 80.5838% ( 22) 00:07:20.528 6.646 - 6.695: 80.8340% ( 30) 00:07:20.528 6.695 - 6.745: 81.0259% ( 23) 00:07:20.528 6.745 - 6.794: 81.1259% ( 12) 00:07:20.528 6.794 - 6.843: 81.2010% ( 9) 00:07:20.528 
6.843 - 6.892: 81.2594% ( 7) 00:07:20.528 6.892 - 6.942: 81.2844% ( 3) 00:07:20.528 6.942 - 6.991: 81.3178% ( 4) 00:07:20.528 6.991 - 7.040: 81.3511% ( 4) 00:07:20.528 7.040 - 7.089: 81.3678% ( 2) 00:07:20.528 7.089 - 7.138: 81.3761% ( 1) 00:07:20.528 7.138 - 7.188: 81.4012% ( 3) 00:07:20.528 7.188 - 7.237: 81.4178% ( 2) 00:07:20.528 7.237 - 7.286: 81.6180% ( 24) 00:07:20.528 7.286 - 7.335: 81.9099% ( 35) 00:07:20.528 7.335 - 7.385: 82.2185% ( 37) 00:07:20.528 7.385 - 7.434: 82.6272% ( 49) 00:07:20.528 7.434 - 7.483: 83.3611% ( 88) 00:07:20.528 7.483 - 7.532: 84.6372% ( 153) 00:07:20.528 7.532 - 7.582: 86.4053% ( 212) 00:07:20.528 7.582 - 7.631: 88.5154% ( 253) 00:07:20.528 7.631 - 7.680: 90.3670% ( 222) 00:07:20.528 7.680 - 7.729: 91.7848% ( 170) 00:07:20.528 7.729 - 7.778: 93.0609% ( 153) 00:07:20.528 7.778 - 7.828: 93.9616% ( 108) 00:07:20.528 7.828 - 7.877: 94.6539% ( 83) 00:07:20.528 7.877 - 7.926: 94.9791% ( 39) 00:07:20.528 7.926 - 7.975: 95.1710% ( 23) 00:07:20.528 7.975 - 8.025: 95.2961% ( 15) 00:07:20.528 8.025 - 8.074: 95.3878% ( 11) 00:07:20.528 8.074 - 8.123: 95.4712% ( 10) 00:07:20.528 8.123 - 8.172: 95.5046% ( 4) 00:07:20.528 8.172 - 8.222: 95.5546% ( 6) 00:07:20.528 8.222 - 8.271: 95.6380% ( 10) 00:07:20.528 8.271 - 8.320: 95.7965% ( 19) 00:07:20.528 8.320 - 8.369: 95.8799% ( 10) 00:07:20.528 8.369 - 8.418: 96.0217% ( 17) 00:07:20.528 8.418 - 8.468: 96.1635% ( 17) 00:07:20.528 8.468 - 8.517: 96.3053% ( 17) 00:07:20.528 8.517 - 8.566: 96.4387% ( 16) 00:07:20.528 8.566 - 8.615: 96.5471% ( 13) 00:07:20.528 8.615 - 8.665: 96.6305% ( 10) 00:07:20.528 8.665 - 8.714: 96.6722% ( 5) 00:07:20.528 8.714 - 8.763: 96.7139% ( 5) 00:07:20.528 8.763 - 8.812: 96.7807% ( 8) 00:07:20.528 8.812 - 8.862: 96.8057% ( 3) 00:07:20.528 8.862 - 8.911: 96.8474% ( 5) 00:07:20.528 8.911 - 8.960: 96.8974% ( 6) 00:07:20.528 8.960 - 9.009: 96.9224% ( 3) 00:07:20.528 9.009 - 9.058: 96.9558% ( 4) 00:07:20.528 9.058 - 9.108: 96.9808% ( 3) 00:07:20.528 9.108 - 9.157: 97.0058% ( 3) 00:07:20.528 9.157 - 9.206: 97.0142% ( 1) 00:07:20.528 9.206 - 9.255: 97.0309% ( 2) 00:07:20.528 9.255 - 9.305: 97.0559% ( 3) 00:07:20.528 9.305 - 9.354: 97.0726% ( 2) 00:07:20.528 9.354 - 9.403: 97.0892% ( 2) 00:07:20.528 9.403 - 9.452: 97.0976% ( 1) 00:07:20.528 9.551 - 9.600: 97.1059% ( 1) 00:07:20.528 9.649 - 9.698: 97.1143% ( 1) 00:07:20.528 9.748 - 9.797: 97.1226% ( 1) 00:07:20.528 9.797 - 9.846: 97.1309% ( 1) 00:07:20.528 9.945 - 9.994: 97.1560% ( 3) 00:07:20.528 9.994 - 10.043: 97.1726% ( 2) 00:07:20.528 10.043 - 10.092: 97.1810% ( 1) 00:07:20.528 10.092 - 10.142: 97.1977% ( 2) 00:07:20.528 10.142 - 10.191: 97.2060% ( 1) 00:07:20.528 10.240 - 10.289: 97.2727% ( 8) 00:07:20.528 10.289 - 10.338: 97.3394% ( 8) 00:07:20.528 10.338 - 10.388: 97.4062% ( 8) 00:07:20.528 10.388 - 10.437: 97.4312% ( 3) 00:07:20.528 10.437 - 10.486: 97.4729% ( 5) 00:07:20.528 10.486 - 10.535: 97.5396% ( 8) 00:07:20.528 10.535 - 10.585: 97.5730% ( 4) 00:07:20.528 10.585 - 10.634: 97.6731% ( 12) 00:07:20.528 10.634 - 10.683: 97.7231% ( 6) 00:07:20.528 10.683 - 10.732: 97.7648% ( 5) 00:07:20.528 10.732 - 10.782: 97.7982% ( 4) 00:07:20.528 10.782 - 10.831: 97.8982% ( 12) 00:07:20.528 10.831 - 10.880: 97.9483% ( 6) 00:07:20.528 10.880 - 10.929: 97.9900% ( 5) 00:07:20.528 10.929 - 10.978: 98.0567% ( 8) 00:07:20.528 10.978 - 11.028: 98.1318% ( 9) 00:07:20.528 11.028 - 11.077: 98.1735% ( 5) 00:07:20.528 11.077 - 11.126: 98.2736% ( 12) 00:07:20.528 11.126 - 11.175: 98.3403% ( 8) 00:07:20.528 11.175 - 11.225: 98.4070% ( 8) 00:07:20.528 11.225 - 11.274: 98.4487% 
( 5) 00:07:20.528 11.274 - 11.323: 98.4987% ( 6) 00:07:20.528 11.323 - 11.372: 98.5238% ( 3) 00:07:20.528 11.372 - 11.422: 98.5488% ( 3) 00:07:20.528 11.422 - 11.471: 98.5738% ( 3) 00:07:20.528 11.471 - 11.520: 98.6072% ( 4) 00:07:20.528 11.520 - 11.569: 98.6489% ( 5) 00:07:20.528 11.569 - 11.618: 98.6656% ( 2) 00:07:20.528 11.618 - 11.668: 98.6906% ( 3) 00:07:20.528 11.668 - 11.717: 98.7406% ( 6) 00:07:20.528 11.717 - 11.766: 98.7490% ( 1) 00:07:20.528 11.766 - 11.815: 98.7823% ( 4) 00:07:20.528 11.815 - 11.865: 98.7907% ( 1) 00:07:20.528 11.865 - 11.914: 98.8157% ( 3) 00:07:20.528 11.963 - 12.012: 98.8240% ( 1) 00:07:20.528 12.062 - 12.111: 98.8324% ( 1) 00:07:20.528 12.160 - 12.209: 98.8407% ( 1) 00:07:20.529 12.357 - 12.406: 98.8490% ( 1) 00:07:20.529 12.554 - 12.603: 98.8574% ( 1) 00:07:20.529 12.603 - 12.702: 98.8657% ( 1) 00:07:20.529 12.702 - 12.800: 98.8741% ( 1) 00:07:20.529 12.800 - 12.898: 98.8907% ( 2) 00:07:20.529 12.898 - 12.997: 98.8991% ( 1) 00:07:20.529 12.997 - 13.095: 98.9241% ( 3) 00:07:20.529 13.194 - 13.292: 98.9491% ( 3) 00:07:20.529 13.391 - 13.489: 98.9908% ( 5) 00:07:20.529 13.489 - 13.588: 99.0158% ( 3) 00:07:20.529 13.588 - 13.686: 99.0325% ( 2) 00:07:20.529 13.686 - 13.785: 99.0492% ( 2) 00:07:20.529 13.785 - 13.883: 99.0909% ( 5) 00:07:20.529 13.883 - 13.982: 99.1076% ( 2) 00:07:20.529 13.982 - 14.080: 99.1243% ( 2) 00:07:20.529 14.080 - 14.178: 99.1576% ( 4) 00:07:20.529 14.178 - 14.277: 99.1827% ( 3) 00:07:20.529 14.277 - 14.375: 99.1910% ( 1) 00:07:20.529 14.375 - 14.474: 99.2244% ( 4) 00:07:20.529 14.474 - 14.572: 99.2410% ( 2) 00:07:20.529 14.572 - 14.671: 99.2744% ( 4) 00:07:20.529 14.671 - 14.769: 99.2994% ( 3) 00:07:20.529 14.769 - 14.868: 99.3078% ( 1) 00:07:20.529 14.868 - 14.966: 99.3161% ( 1) 00:07:20.529 14.966 - 15.065: 99.3244% ( 1) 00:07:20.529 15.065 - 15.163: 99.3411% ( 2) 00:07:20.529 15.163 - 15.262: 99.3745% ( 4) 00:07:20.529 15.458 - 15.557: 99.4078% ( 4) 00:07:20.529 15.557 - 15.655: 99.4162% ( 1) 00:07:20.529 15.655 - 15.754: 99.4245% ( 1) 00:07:20.529 15.754 - 15.852: 99.4412% ( 2) 00:07:20.529 15.852 - 15.951: 99.4495% ( 1) 00:07:20.529 15.951 - 16.049: 99.4579% ( 1) 00:07:20.529 16.049 - 16.148: 99.4662% ( 1) 00:07:20.529 16.148 - 16.246: 99.4829% ( 2) 00:07:20.529 16.345 - 16.443: 99.4912% ( 1) 00:07:20.529 16.443 - 16.542: 99.4996% ( 1) 00:07:20.529 16.542 - 16.640: 99.5163% ( 2) 00:07:20.529 17.231 - 17.329: 99.5329% ( 2) 00:07:20.529 17.723 - 17.822: 99.5413% ( 1) 00:07:20.529 17.920 - 18.018: 99.5496% ( 1) 00:07:20.529 18.314 - 18.412: 99.5580% ( 1) 00:07:20.529 19.200 - 19.298: 99.5663% ( 1) 00:07:20.529 19.692 - 19.791: 99.5746% ( 1) 00:07:20.529 19.791 - 19.889: 99.5830% ( 1) 00:07:20.529 20.185 - 20.283: 99.6080% ( 3) 00:07:20.529 20.283 - 20.382: 99.6163% ( 1) 00:07:20.529 20.382 - 20.480: 99.6580% ( 5) 00:07:20.529 20.480 - 20.578: 99.6664% ( 1) 00:07:20.529 20.677 - 20.775: 99.6747% ( 1) 00:07:20.529 20.775 - 20.874: 99.6997% ( 3) 00:07:20.529 20.874 - 20.972: 99.7248% ( 3) 00:07:20.529 20.972 - 21.071: 99.7331% ( 1) 00:07:20.529 21.071 - 21.169: 99.7415% ( 1) 00:07:20.529 21.268 - 21.366: 99.7498% ( 1) 00:07:20.529 21.957 - 22.055: 99.7581% ( 1) 00:07:20.529 22.449 - 22.548: 99.7665% ( 1) 00:07:20.529 22.646 - 22.745: 99.7748% ( 1) 00:07:20.529 22.745 - 22.843: 99.7832% ( 1) 00:07:20.529 23.138 - 23.237: 99.7915% ( 1) 00:07:20.529 23.434 - 23.532: 99.7998% ( 1) 00:07:20.529 23.532 - 23.631: 99.8082% ( 1) 00:07:20.529 24.025 - 24.123: 99.8165% ( 1) 00:07:20.529 24.418 - 24.517: 99.8332% ( 2) 00:07:20.529 24.615 - 24.714: 
99.8415% ( 1) 00:07:20.529 28.357 - 28.554: 99.8499% ( 1) 00:07:20.529 29.735 - 29.932: 99.8582% ( 1) 00:07:20.529 31.508 - 31.705: 99.8666% ( 1) 00:07:20.529 33.083 - 33.280: 99.8749% ( 1) 00:07:20.529 33.280 - 33.477: 99.8832% ( 1) 00:07:20.529 35.643 - 35.840: 99.8916% ( 1) 00:07:20.529 36.037 - 36.234: 99.8999% ( 1) 00:07:20.529 40.172 - 40.369: 99.9083% ( 1) 00:07:20.529 42.142 - 42.338: 99.9166% ( 1) 00:07:20.529 42.338 - 42.535: 99.9249% ( 1) 00:07:20.529 45.489 - 45.686: 99.9333% ( 1) 00:07:20.529 48.443 - 48.640: 99.9416% ( 1) 00:07:20.529 50.806 - 51.200: 99.9500% ( 1) 00:07:20.529 54.351 - 54.745: 99.9583% ( 1) 00:07:20.529 55.138 - 55.532: 99.9666% ( 1) 00:07:20.529 63.015 - 63.409: 99.9750% ( 1) 00:07:20.529 248.911 - 250.486: 99.9833% ( 1) 00:07:20.529 269.391 - 270.966: 99.9917% ( 1) 00:07:20.529 356.037 - 357.612: 100.0000% ( 1) 00:07:20.529 00:07:20.529 00:07:20.529 real 0m1.182s 00:07:20.529 user 0m1.062s 00:07:20.529 sys 0m0.074s 00:07:20.529 02:54:36 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.529 ************************************ 00:07:20.529 END TEST nvme_overhead 00:07:20.529 ************************************ 00:07:20.529 02:54:36 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:20.529 02:54:36 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:20.529 02:54:36 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:20.529 02:54:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.529 02:54:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.529 ************************************ 00:07:20.529 START TEST nvme_arbitration 00:07:20.529 ************************************ 00:07:20.529 02:54:36 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:23.822 Initializing NVMe Controllers 00:07:23.822 Attached to 0000:00:10.0 00:07:23.822 Attached to 0000:00:11.0 00:07:23.822 Attached to 0000:00:13.0 00:07:23.822 Attached to 0000:00:12.0 00:07:23.822 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:23.822 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:23.822 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:23.822 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:23.822 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:23.822 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:23.822 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:23.822 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:23.822 Initialization complete. Launching workers. 
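The arbitration example was started above as arbitration -t 3 -i 0, and the tool echoed the full configuration it expanded that into before launching a worker per core; the per-core IO/s results follow. A sketch for rerunning it with the same effective settings (every flag below is copied from the configuration line the tool printed above; the repo path is the one this log uses throughout):

  # One worker thread per core in mask 0xf, 64-deep queues, 50/50 random
  # read/write, with the 100000-I/O batches the result lines report, capped at 3 seconds.
  /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
      -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0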
00:07:23.822 Starting thread on core 1 with urgent priority queue 00:07:23.822 Starting thread on core 2 with urgent priority queue 00:07:23.822 Starting thread on core 3 with urgent priority queue 00:07:23.822 Starting thread on core 0 with urgent priority queue 00:07:23.822 QEMU NVMe Ctrl (12340 ) core 0: 5903.33 IO/s 16.94 secs/100000 ios 00:07:23.822 QEMU NVMe Ctrl (12342 ) core 0: 5909.33 IO/s 16.92 secs/100000 ios 00:07:23.822 QEMU NVMe Ctrl (12341 ) core 1: 5943.00 IO/s 16.83 secs/100000 ios 00:07:23.822 QEMU NVMe Ctrl (12342 ) core 1: 5930.67 IO/s 16.86 secs/100000 ios 00:07:23.822 QEMU NVMe Ctrl (12343 ) core 2: 5536.00 IO/s 18.06 secs/100000 ios 00:07:23.822 QEMU NVMe Ctrl (12342 ) core 3: 5526.00 IO/s 18.10 secs/100000 ios 00:07:23.822 ======================================================== 00:07:23.822 00:07:23.822 00:07:23.822 real 0m3.183s 00:07:23.822 user 0m8.981s 00:07:23.822 sys 0m0.098s 00:07:23.822 02:54:39 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.822 ************************************ 00:07:23.822 END TEST nvme_arbitration 00:07:23.822 02:54:39 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:23.822 ************************************ 00:07:23.822 02:54:39 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:23.822 02:54:39 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.822 02:54:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.822 02:54:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.822 ************************************ 00:07:23.822 START TEST nvme_single_aen 00:07:23.822 ************************************ 00:07:23.822 02:54:39 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:23.822 Asynchronous Event Request test 00:07:23.822 Attached to 0000:00:10.0 00:07:23.822 Attached to 0000:00:11.0 00:07:23.822 Attached to 0000:00:13.0 00:07:23.822 Attached to 0000:00:12.0 00:07:23.822 Reset controller to setup AER completions for this process 00:07:23.822 Registering asynchronous event callbacks... 
00:07:23.822 Getting orig temperature thresholds of all controllers 00:07:23.822 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:23.822 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:23.822 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:23.822 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:23.822 Setting all controllers temperature threshold low to trigger AER 00:07:23.822 Waiting for all controllers temperature threshold to be set lower 00:07:23.822 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:23.822 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:23.822 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:23.823 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:23.823 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:23.823 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:23.823 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:23.823 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:23.823 Waiting for all controllers to trigger AER and reset threshold 00:07:23.823 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.823 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.823 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.823 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.823 Cleaning up... 00:07:23.823 00:07:23.823 real 0m0.189s 00:07:23.823 user 0m0.067s 00:07:23.823 sys 0m0.077s 00:07:23.823 02:54:39 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.823 ************************************ 00:07:23.823 END TEST nvme_single_aen 00:07:23.823 ************************************ 00:07:23.823 02:54:39 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:23.823 02:54:39 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:23.823 02:54:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.823 02:54:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.823 02:54:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.823 ************************************ 00:07:23.823 START TEST nvme_doorbell_aers 00:07:23.823 ************************************ 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
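The xtrace above is nvme_doorbell_aers discovering its targets: get_nvme_bdfs asks gen_nvme.sh for an SPDK bdev configuration in JSON and lets jq extract each controller's PCI address (traddr). A condensed standalone version of that step, with the script path and jq filter copied verbatim from the trace:

  # Enumerate the NVMe PCI addresses the way the harness does; on this box the
  # subsequent printf in the trace shows 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"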
00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:23.823 02:54:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:24.080 [2024-11-29 02:54:39.829647] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:07:34.078 Executing: test_write_invalid_db 00:07:34.078 Waiting for AER completion... 00:07:34.078 Failure: test_write_invalid_db 00:07:34.078 00:07:34.078 Executing: test_invalid_db_write_overflow_sq 00:07:34.078 Waiting for AER completion... 00:07:34.078 Failure: test_invalid_db_write_overflow_sq 00:07:34.078 00:07:34.078 Executing: test_invalid_db_write_overflow_cq 00:07:34.078 Waiting for AER completion... 00:07:34.078 Failure: test_invalid_db_write_overflow_cq 00:07:34.078 00:07:34.078 02:54:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:34.078 02:54:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:34.078 [2024-11-29 02:54:49.850054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:07:44.044 Executing: test_write_invalid_db 00:07:44.044 Waiting for AER completion... 00:07:44.044 Failure: test_write_invalid_db 00:07:44.044 00:07:44.044 Executing: test_invalid_db_write_overflow_sq 00:07:44.045 Waiting for AER completion... 00:07:44.045 Failure: test_invalid_db_write_overflow_sq 00:07:44.045 00:07:44.045 Executing: test_invalid_db_write_overflow_cq 00:07:44.045 Waiting for AER completion... 00:07:44.045 Failure: test_invalid_db_write_overflow_cq 00:07:44.045 00:07:44.045 02:54:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:44.045 02:54:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:44.045 [2024-11-29 02:54:59.876013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:07:54.050 Executing: test_write_invalid_db 00:07:54.050 Waiting for AER completion... 00:07:54.050 Failure: test_write_invalid_db 00:07:54.050 00:07:54.050 Executing: test_invalid_db_write_overflow_sq 00:07:54.050 Waiting for AER completion... 00:07:54.050 Failure: test_invalid_db_write_overflow_sq 00:07:54.050 00:07:54.050 Executing: test_invalid_db_write_overflow_cq 00:07:54.050 Waiting for AER completion... 
00:07:54.050 Failure: test_invalid_db_write_overflow_cq 00:07:54.050 00:07:54.050 02:55:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:54.050 02:55:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:54.050 [2024-11-29 02:55:09.914012] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.048 Executing: test_write_invalid_db 00:08:04.048 Waiting for AER completion... 00:08:04.048 Failure: test_write_invalid_db 00:08:04.048 00:08:04.048 Executing: test_invalid_db_write_overflow_sq 00:08:04.048 Waiting for AER completion... 00:08:04.048 Failure: test_invalid_db_write_overflow_sq 00:08:04.048 00:08:04.048 Executing: test_invalid_db_write_overflow_cq 00:08:04.048 Waiting for AER completion... 00:08:04.048 Failure: test_invalid_db_write_overflow_cq 00:08:04.048 00:08:04.048 00:08:04.048 real 0m40.183s 00:08:04.048 user 0m34.132s 00:08:04.048 sys 0m5.689s 00:08:04.048 02:55:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.048 ************************************ 00:08:04.048 END TEST nvme_doorbell_aers 00:08:04.048 02:55:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 ************************************ 00:08:04.048 02:55:19 nvme -- nvme/nvme.sh@97 -- # uname 00:08:04.048 02:55:19 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:04.048 02:55:19 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:04.048 02:55:19 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:04.048 02:55:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.048 02:55:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 ************************************ 00:08:04.048 START TEST nvme_multi_aen 00:08:04.048 ************************************ 00:08:04.048 02:55:19 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:04.307 [2024-11-29 02:55:19.969682] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.969746] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.969760] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 Child process pid: 75159 00:08:04.307 [2024-11-29 02:55:19.971061] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.971106] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.971117] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 
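A note on the repeated nvme_pcie_common.c ERROR lines ("The owning process (pid 74633) is not found. Dropping the request."): they fire when queued admin requests are still tagged with a test process that has already exited, and since each affected section still runs to its END TEST banner, they read as teardown noise rather than failures (an inference from this log, not a confirmed diagnosis). The multi-AER test that triggers the latest batch is the same aer tool used for nvme_single_aen earlier, with one extra flag; invocation copied from the trace above:

  # -T exercises the temperature-threshold AERs (as in nvme_single_aen); -m adds
  # the child-process pass whose [Child] output follows below.
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0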
00:08:04.307 [2024-11-29 02:55:19.972243] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.972273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.972283] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.973320] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.973351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [2024-11-29 02:55:19.973361] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74633) is not found. Dropping the request. 00:08:04.307 [Child] Asynchronous Event Request test 00:08:04.307 [Child] Attached to 0000:00:10.0 00:08:04.307 [Child] Attached to 0000:00:11.0 00:08:04.307 [Child] Attached to 0000:00:13.0 00:08:04.307 [Child] Attached to 0000:00:12.0 00:08:04.307 [Child] Registering asynchronous event callbacks... 00:08:04.307 [Child] Getting orig temperature thresholds of all controllers 00:08:04.307 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:04.307 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 [Child] Cleaning up... 00:08:04.307 Asynchronous Event Request test 00:08:04.307 Attached to 0000:00:10.0 00:08:04.307 Attached to 0000:00:11.0 00:08:04.307 Attached to 0000:00:13.0 00:08:04.307 Attached to 0000:00:12.0 00:08:04.307 Reset controller to setup AER completions for this process 00:08:04.307 Registering asynchronous event callbacks... 
00:08:04.307 Getting orig temperature thresholds of all controllers 00:08:04.307 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.307 Setting all controllers temperature threshold low to trigger AER 00:08:04.307 Waiting for all controllers temperature threshold to be set lower 00:08:04.307 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:04.307 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:04.307 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:04.307 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.307 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:04.307 Waiting for all controllers to trigger AER and reset threshold 00:08:04.307 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.307 Cleaning up... 00:08:04.307 00:08:04.307 real 0m0.362s 00:08:04.307 user 0m0.120s 00:08:04.307 sys 0m0.144s 00:08:04.307 02:55:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.307 ************************************ 00:08:04.307 END TEST nvme_multi_aen 00:08:04.307 ************************************ 00:08:04.307 02:55:20 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:04.307 02:55:20 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:04.308 02:55:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:04.308 02:55:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.308 02:55:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.308 ************************************ 00:08:04.308 START TEST nvme_startup 00:08:04.308 ************************************ 00:08:04.308 02:55:20 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:04.566 Initializing NVMe Controllers 00:08:04.566 Attached to 0000:00:10.0 00:08:04.566 Attached to 0000:00:11.0 00:08:04.566 Attached to 0000:00:13.0 00:08:04.566 Attached to 0000:00:12.0 00:08:04.566 Initialization complete. 00:08:04.566 Time used:145869.391 (us). 
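nvme_startup above attached all four controllers in Time used:145869.391 (us), roughly 0.146 s, which squares with the real 0m0.194s wall time in the summary that follows. The -t 1000000 argument looks like a microsecond budget that the measured startup time has to stay under, but that reading is inferred from the units, not confirmed anywhere in this log. Invocation as traced:

  # Measure controller probe/attach time against the -t threshold
  # (microseconds, by the look of the reported figure).
  /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000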
00:08:04.566 00:08:04.566 real 0m0.194s 00:08:04.566 user 0m0.063s 00:08:04.566 sys 0m0.077s 00:08:04.566 02:55:20 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.566 ************************************ 00:08:04.566 END TEST nvme_startup 00:08:04.566 ************************************ 00:08:04.566 02:55:20 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 02:55:20 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:04.566 02:55:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.566 02:55:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.566 02:55:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 ************************************ 00:08:04.566 START TEST nvme_multi_secondary 00:08:04.566 ************************************ 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75210 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75211 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:04.566 02:55:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:07.844 Initializing NVMe Controllers 00:08:07.844 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.844 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.845 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:07.845 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.845 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:07.845 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:07.845 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:07.845 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:07.845 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:07.845 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:07.845 Initialization complete. Launching workers. 
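nvme_multi_secondary exercises SPDK's multi-process mode: three spdk_nvme_perf instances share the controllers through shared-memory id -i 0, kept apart by core masks (0x1 for the first instance, 0x2 and 0x4 for the other two), with the 5-second run outliving the 3-second ones so the shorter runs always overlap a live peer. A sketch of the same choreography outside the harness; the flags are verbatim from the traced commands above, while the backgrounding, the settle-time sleep, and the primary/secondary labels are assumptions about what nvme.sh does around its pid0/pid1 bookkeeping:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # long run, assumed primary
  pid0=$!
  sleep 1                                            # assumed settle time
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2, foreground
  wait "$pid0" "$pid1"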
00:08:07.845 ========================================================
00:08:07.845 Latency(us)
00:08:07.845 Device Information : IOPS MiB/s Average min max
00:08:07.845 PCIE (0000:00:10.0) NSID 1 from core 1: 7280.51 28.44 2196.21 982.70 6329.82
00:08:07.845 PCIE (0000:00:11.0) NSID 1 from core 1: 7280.51 28.44 2197.17 1100.15 6034.97
00:08:07.845 PCIE (0000:00:13.0) NSID 1 from core 1: 7280.51 28.44 2197.17 1102.93 5746.64
00:08:07.845 PCIE (0000:00:12.0) NSID 1 from core 1: 7280.51 28.44 2197.18 988.37 5818.97
00:08:07.845 PCIE (0000:00:12.0) NSID 2 from core 1: 7280.51 28.44 2197.19 1116.40 5797.40
00:08:07.845 PCIE (0000:00:12.0) NSID 3 from core 1: 7280.51 28.44 2197.17 1022.76 5694.74
00:08:07.845 ========================================================
00:08:07.845 Total : 43683.07 170.64 2197.02 982.70 6329.82
00:08:07.845
00:08:07.845 Initializing NVMe Controllers
00:08:07.845 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:07.845 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:07.845 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:07.845 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:07.845 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:07.845 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:07.845 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:07.845 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:07.845 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:07.845 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:07.845 Initialization complete. Launching workers.
00:08:07.845 ========================================================
00:08:07.845 Latency(us)
00:08:07.845 Device Information : IOPS MiB/s Average min max
00:08:07.845 PCIE (0000:00:10.0) NSID 1 from core 2: 3013.19 11.77 5307.30 928.27 13493.13
00:08:07.845 PCIE (0000:00:11.0) NSID 1 from core 2: 3013.19 11.77 5309.42 1072.12 16846.35
00:08:07.845 PCIE (0000:00:13.0) NSID 1 from core 2: 3013.19 11.77 5309.63 978.44 16226.68
00:08:07.845 PCIE (0000:00:12.0) NSID 1 from core 2: 3013.19 11.77 5309.65 1093.46 13081.66
00:08:07.845 PCIE (0000:00:12.0) NSID 2 from core 2: 3013.19 11.77 5310.02 1081.93 12882.30
00:08:07.845 PCIE (0000:00:12.0) NSID 3 from core 2: 3013.19 11.77 5310.02 929.60 13572.42
00:08:07.845 ========================================================
00:08:07.845 Total : 18079.15 70.62 5309.34 928.27 16846.35
00:08:07.845
00:08:08.103 02:55:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75210
00:08:10.002 Initializing NVMe Controllers
00:08:10.002 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:10.002 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:10.002 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:10.002 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:10.002 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:10.002 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:10.002 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:10.002 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:10.002 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:10.002 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:10.002 Initialization complete. Launching workers.
00:08:10.002 ======================================================== 00:08:10.002 Latency(us) 00:08:10.002 Device Information : IOPS MiB/s Average min max 00:08:10.002 PCIE (0000:00:10.0) NSID 1 from core 0: 10237.59 39.99 1561.57 685.19 5790.81 00:08:10.002 PCIE (0000:00:11.0) NSID 1 from core 0: 10237.59 39.99 1562.46 704.17 5632.09 00:08:10.002 PCIE (0000:00:13.0) NSID 1 from core 0: 10237.59 39.99 1562.45 707.61 5982.64 00:08:10.002 PCIE (0000:00:12.0) NSID 1 from core 0: 10237.59 39.99 1562.45 705.81 5472.62 00:08:10.002 PCIE (0000:00:12.0) NSID 2 from core 0: 10237.59 39.99 1562.44 709.38 5136.87 00:08:10.002 PCIE (0000:00:12.0) NSID 3 from core 0: 10237.59 39.99 1562.44 684.86 5280.79 00:08:10.002 ======================================================== 00:08:10.002 Total : 61425.53 239.94 1562.30 684.86 5982.64 00:08:10.002 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75211 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75280 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75281 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:10.002 02:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:13.280 Initializing NVMe Controllers 00:08:13.280 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.280 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:13.280 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:13.280 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:13.280 Initialization complete. Launching workers. 
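nvme.sh serialises each round by waiting on the background pids (wait 75210/75211 above, wait 75280/75281 after the second round's tables below); since bash's wait returns the child's exit status, a failed perf run fails the test. A hedged sketch of that collection step:

  # Sketch: wait returns each child's exit status, so a perf failure propagates.
  wait "$pid0" || exit 1
  wait "$pid1" || exit 1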
00:08:13.280 ======================================================== 00:08:13.280 Latency(us) 00:08:13.280 Device Information : IOPS MiB/s Average min max 00:08:13.280 PCIE (0000:00:10.0) NSID 1 from core 1: 7663.16 29.93 2086.25 760.40 5650.73 00:08:13.280 PCIE (0000:00:11.0) NSID 1 from core 1: 7663.16 29.93 2087.16 778.19 6054.81 00:08:13.280 PCIE (0000:00:13.0) NSID 1 from core 1: 7663.16 29.93 2087.09 777.97 5597.67 00:08:13.280 PCIE (0000:00:12.0) NSID 1 from core 1: 7663.16 29.93 2087.02 786.00 5649.37 00:08:13.280 PCIE (0000:00:12.0) NSID 2 from core 1: 7663.16 29.93 2086.96 780.73 5750.13 00:08:13.280 PCIE (0000:00:12.0) NSID 3 from core 1: 7663.16 29.93 2086.89 784.25 5705.81 00:08:13.280 ======================================================== 00:08:13.280 Total : 45978.96 179.61 2086.89 760.40 6054.81 00:08:13.280 00:08:13.280 Initializing NVMe Controllers 00:08:13.280 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.280 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.280 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:13.280 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:13.280 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:13.280 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:13.280 Initialization complete. Launching workers. 00:08:13.280 ======================================================== 00:08:13.280 Latency(us) 00:08:13.280 Device Information : IOPS MiB/s Average min max 00:08:13.280 PCIE (0000:00:10.0) NSID 1 from core 0: 7694.14 30.06 2078.09 738.99 5864.41 00:08:13.280 PCIE (0000:00:11.0) NSID 1 from core 0: 7694.14 30.06 2078.97 758.79 6261.77 00:08:13.280 PCIE (0000:00:13.0) NSID 1 from core 0: 7694.14 30.06 2078.89 658.63 5999.83 00:08:13.280 PCIE (0000:00:12.0) NSID 1 from core 0: 7694.14 30.06 2078.81 562.02 5433.98 00:08:13.280 PCIE (0000:00:12.0) NSID 2 from core 0: 7694.14 30.06 2078.73 499.32 5684.16 00:08:13.280 PCIE (0000:00:12.0) NSID 3 from core 0: 7694.14 30.06 2078.63 440.99 6023.22 00:08:13.280 ======================================================== 00:08:13.280 Total : 46164.83 180.33 2078.69 440.99 6261.77 00:08:13.280 00:08:15.182 Initializing NVMe Controllers 00:08:15.182 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.182 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.182 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.182 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.182 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:15.182 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:15.182 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:15.182 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:15.182 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:15.182 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:15.182 Initialization complete. Launching workers. 
00:08:15.182 ======================================================== 00:08:15.182 Latency(us) 00:08:15.182 Device Information : IOPS MiB/s Average min max 00:08:15.182 PCIE (0000:00:10.0) NSID 1 from core 2: 4494.66 17.56 3558.22 763.53 12572.87 00:08:15.182 PCIE (0000:00:11.0) NSID 1 from core 2: 4494.66 17.56 3559.05 734.62 12972.33 00:08:15.182 PCIE (0000:00:13.0) NSID 1 from core 2: 4494.66 17.56 3559.40 776.44 12553.97 00:08:15.182 PCIE (0000:00:12.0) NSID 1 from core 2: 4494.66 17.56 3559.03 791.59 12793.15 00:08:15.182 PCIE (0000:00:12.0) NSID 2 from core 2: 4494.66 17.56 3559.37 779.93 12978.32 00:08:15.182 PCIE (0000:00:12.0) NSID 3 from core 2: 4494.66 17.56 3559.16 784.78 12780.44 00:08:15.182 ======================================================== 00:08:15.182 Total : 26967.96 105.34 3559.04 734.62 12978.32 00:08:15.182 00:08:15.182 02:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75280 00:08:15.182 02:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75281 00:08:15.182 00:08:15.182 real 0m10.512s 00:08:15.182 user 0m18.307s 00:08:15.182 sys 0m0.516s 00:08:15.182 02:55:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.182 02:55:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:15.182 ************************************ 00:08:15.182 END TEST nvme_multi_secondary 00:08:15.182 ************************************ 00:08:15.182 02:55:30 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:15.182 02:55:30 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:15.182 02:55:30 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74248 ]] 00:08:15.182 02:55:30 nvme -- common/autotest_common.sh@1094 -- # kill 74248 00:08:15.182 02:55:30 nvme -- common/autotest_common.sh@1095 -- # wait 74248 00:08:15.182 [2024-11-29 02:55:30.997788] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.997966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.998002] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.998038] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.998985] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999098] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999824] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 
00:08:15.182 [2024-11-29 02:55:30.999897] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:30.999939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:31.000566] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:31.000631] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:31.000651] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 [2024-11-29 02:55:31.000672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75158) is not found. Dropping the request. 00:08:15.182 02:55:31 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:15.182 02:55:31 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:15.182 02:55:31 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:15.182 02:55:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.182 02:55:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.182 02:55:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.182 ************************************ 00:08:15.182 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:15.182 ************************************ 00:08:15.182 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:15.182 * Looking for test storage... 
00:08:15.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:15.182 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.182 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.182 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:15.444 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.445 --rc genhtml_branch_coverage=1 00:08:15.445 --rc genhtml_function_coverage=1 00:08:15.445 --rc genhtml_legend=1 00:08:15.445 --rc geninfo_all_blocks=1 00:08:15.445 --rc geninfo_unexecuted_blocks=1 00:08:15.445 00:08:15.445 ' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.445 --rc genhtml_branch_coverage=1 00:08:15.445 --rc genhtml_function_coverage=1 00:08:15.445 --rc genhtml_legend=1 00:08:15.445 --rc geninfo_all_blocks=1 00:08:15.445 --rc geninfo_unexecuted_blocks=1 00:08:15.445 00:08:15.445 ' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.445 --rc genhtml_branch_coverage=1 00:08:15.445 --rc genhtml_function_coverage=1 00:08:15.445 --rc genhtml_legend=1 00:08:15.445 --rc geninfo_all_blocks=1 00:08:15.445 --rc geninfo_unexecuted_blocks=1 00:08:15.445 00:08:15.445 ' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.445 --rc genhtml_branch_coverage=1 00:08:15.445 --rc genhtml_function_coverage=1 00:08:15.445 --rc genhtml_legend=1 00:08:15.445 --rc geninfo_all_blocks=1 00:08:15.445 --rc geninfo_unexecuted_blocks=1 00:08:15.445 00:08:15.445 ' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:15.445 
02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75442 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75442 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75442 ']' 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
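The get_first_nvme_bdf/get_nvme_bdfs helpers traced above reduce to a jq pipeline over scripts/gen_nvme.sh output: collect every params.traddr, require a non-empty list, and print the first entry. A standalone sketch of the same pipeline, with rootdir as in this run:

  # Sketch of get_first_nvme_bdf from common/autotest_common.sh.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe bdfs found' >&2; exit 1; }
  echo "${bdfs[0]}"    # 0000:00:10.0 on this run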
00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:15.445 02:55:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:15.445 [2024-11-29 02:55:31.354596] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:15.445 [2024-11-29 02:55:31.354743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75442 ] 00:08:15.704 [2024-11-29 02:55:31.511565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.704 [2024-11-29 02:55:31.533643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.704 [2024-11-29 02:55:31.533970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.704 [2024-11-29 02:55:31.534396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.704 [2024-11-29 02:55:31.534475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:16.270 nvme0n1 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_a1Cfv.txt 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:16.270 true 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.270 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:16.528 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732848932 00:08:16.528 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75464 00:08:16.528 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:16.528 02:55:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:16.528 02:55:32 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:18.426 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:18.427 [2024-11-29 02:55:34.273741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:18.427 [2024-11-29 02:55:34.274002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:18.427 [2024-11-29 02:55:34.274035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:18.427 [2024-11-29 02:55:34.274052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:18.427 [2024-11-29 02:55:34.275915] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.427 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75464 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75464 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75464 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_a1Cfv.txt 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:18.427 02:55:34 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_a1Cfv.txt 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75442 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75442 ']' 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75442 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75442 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.427 killing process with pid 75442 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75442' 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75442 00:08:18.427 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75442 00:08:18.686 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:18.686 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:18.686 00:08:18.686 real 0m3.536s 
00:08:18.686 user 0m12.651s 00:08:18.686 sys 0m0.457s 00:08:18.686 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.686 ************************************ 00:08:18.686 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:18.686 ************************************ 00:08:18.686 02:55:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:18.686 02:55:34 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:18.686 02:55:34 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:18.686 02:55:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.686 02:55:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.686 02:55:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.686 ************************************ 00:08:18.686 START TEST nvme_fio 00:08:18.686 ************************************ 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:18.686 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:18.686 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:18.686 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:18.686 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:18.943 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:18.943 02:55:34 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:18.943 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:18.944 02:55:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:19.202 02:55:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:19.202 02:55:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:19.202 02:55:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:19.461 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:19.461 fio-3.35 00:08:19.461 Starting 1 thread 00:08:26.045 00:08:26.045 test: (groupid=0, jobs=1): err= 0: pid=75588: Fri Nov 29 02:55:40 2024 00:08:26.045 read: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(148MiB/2001msec) 00:08:26.045 slat (nsec): min=3273, max=64674, avg=5565.80, stdev=2955.96 00:08:26.045 clat (usec): min=577, max=11260, avg=3356.81, stdev=1190.43 00:08:26.045 lat (usec): min=582, max=11314, avg=3362.38, stdev=1191.74 00:08:26.045 clat percentiles (usec): 00:08:26.045 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:08:26.045 | 30.00th=[ 2573], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3163], 00:08:26.045 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5211], 95.00th=[ 5866], 00:08:26.045 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 8717], 99.95th=[ 9896], 00:08:26.045 | 99.99th=[10945] 00:08:26.045 bw ( KiB/s): min=72984, max=80680, per=100.00%, avg=77853.33, stdev=4235.15, samples=3 00:08:26.045 iops : min=18246, max=20170, avg=19463.33, stdev=1058.79, samples=3 00:08:26.045 write: IOPS=18.9k, BW=74.0MiB/s (77.6MB/s)(148MiB/2001msec); 0 zone resets 00:08:26.045 slat (nsec): min=3340, max=69731, avg=5806.16, stdev=3133.70 00:08:26.045 clat (usec): min=587, max=11051, avg=3378.85, stdev=1194.64 00:08:26.045 lat (usec): min=592, max=11064, avg=3384.66, stdev=1195.97 00:08:26.045 clat percentiles (usec): 00:08:26.045 | 1.00th=[ 1860], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2474], 00:08:26.045 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2933], 60.00th=[ 3195], 00:08:26.045 | 70.00th=[ 3589], 80.00th=[ 4359], 90.00th=[ 5276], 95.00th=[ 5866], 00:08:26.045 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 9765], 00:08:26.045 | 99.99th=[10814] 00:08:26.045 bw ( KiB/s): min=73320, max=81000, per=100.00%, avg=78018.67, stdev=4117.95, samples=3 00:08:26.045 iops : min=18330, max=20250, avg=19504.67, stdev=1029.49, samples=3 00:08:26.045 lat 
(usec) : 750=0.01%, 1000=0.02% 00:08:26.045 lat (msec) : 2=1.55%, 4=74.34%, 10=24.04%, 20=0.04% 00:08:26.045 cpu : usr=98.95%, sys=0.00%, ctx=3, majf=0, minf=625 00:08:26.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:26.045 issued rwts: total=37899,37918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:26.045 00:08:26.045 Run status group 0 (all jobs): 00:08:26.045 READ: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:26.045 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:26.045 ----------------------------------------------------- 00:08:26.045 Suppressions used: 00:08:26.045 count bytes template 00:08:26.045 1 32 /usr/src/fio/parse.c 00:08:26.045 1 8 libtcmalloc_minimal.so 00:08:26.045 ----------------------------------------------------- 00:08:26.045 00:08:26.045 02:55:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:26.045 02:55:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:26.045 02:55:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:26.045 02:55:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:26.045 02:55:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:26.045 02:55:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:26.045 02:55:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:26.045 02:55:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:26.045 02:55:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:26.045 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:26.045 fio-3.35 00:08:26.045 Starting 1 thread 00:08:32.628 00:08:32.629 test: (groupid=0, jobs=1): err= 0: pid=75643: Fri Nov 29 02:55:47 2024 00:08:32.629 read: IOPS=20.2k, BW=78.7MiB/s (82.6MB/s)(158MiB/2001msec) 00:08:32.629 slat (nsec): min=4219, max=70977, avg=5304.75, stdev=2624.14 00:08:32.629 clat (usec): min=245, max=13063, avg=3166.07, stdev=1148.52 00:08:32.629 lat (usec): min=249, max=13094, avg=3171.37, stdev=1149.72 00:08:32.629 clat percentiles (usec): 00:08:32.629 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2442], 00:08:32.629 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2868], 00:08:32.629 | 70.00th=[ 3130], 80.00th=[ 3818], 90.00th=[ 5014], 95.00th=[ 5669], 00:08:32.629 | 99.00th=[ 6849], 99.50th=[ 7635], 99.90th=[ 9241], 99.95th=[10028], 00:08:32.629 | 99.99th=[12387] 00:08:32.629 bw ( KiB/s): min=72064, max=87088, per=99.12%, avg=79920.00, stdev=7535.59, samples=3 00:08:32.629 iops : min=18016, max=21772, avg=19980.00, stdev=1883.90, samples=3 00:08:32.629 write: IOPS=20.1k, BW=78.6MiB/s (82.4MB/s)(157MiB/2001msec); 0 zone resets 00:08:32.629 slat (nsec): min=4303, max=69696, avg=5530.72, stdev=2551.80 00:08:32.629 clat (usec): min=193, max=12476, avg=3172.24, stdev=1140.15 00:08:32.629 lat (usec): min=198, max=12485, avg=3177.77, stdev=1141.30 00:08:32.629 clat percentiles (usec): 00:08:32.629 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:08:32.629 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2868], 00:08:32.629 | 70.00th=[ 3130], 80.00th=[ 3851], 90.00th=[ 5014], 95.00th=[ 5669], 00:08:32.629 | 99.00th=[ 6915], 99.50th=[ 7570], 99.90th=[ 9110], 99.95th=[10159], 00:08:32.629 | 99.99th=[11731] 00:08:32.629 bw ( KiB/s): min=71952, max=87544, per=99.38%, avg=79941.33, stdev=7803.19, samples=3 00:08:32.629 iops : min=17988, max=21886, avg=19985.33, stdev=1950.80, samples=3 00:08:32.629 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:08:32.629 lat (msec) : 2=0.84%, 4=80.49%, 10=18.55%, 20=0.06% 00:08:32.629 cpu : usr=98.85%, sys=0.25%, ctx=3, majf=0, minf=624 00:08:32.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:32.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.629 issued rwts: total=40336,40239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.629 00:08:32.629 Run status group 0 (all jobs): 00:08:32.629 READ: bw=78.7MiB/s (82.6MB/s), 78.7MiB/s-78.7MiB/s (82.6MB/s-82.6MB/s), io=158MiB (165MB), run=2001-2001msec 00:08:32.629 WRITE: bw=78.6MiB/s (82.4MB/s), 78.6MiB/s-78.6MiB/s (82.4MB/s-82.4MB/s), io=157MiB (165MB), run=2001-2001msec 00:08:32.629 ----------------------------------------------------- 00:08:32.629 Suppressions used: 00:08:32.629 count bytes template 
00:08:32.629 1 32 /usr/src/fio/parse.c 00:08:32.629 1 8 libtcmalloc_minimal.so 00:08:32.629 ----------------------------------------------------- 00:08:32.629 00:08:32.629 02:55:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:32.629 02:55:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:32.629 02:55:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:32.629 02:55:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:32.629 02:55:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:32.629 02:55:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:32.629 02:55:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:32.629 02:55:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:32.629 02:55:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:32.629 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:32.629 fio-3.35 00:08:32.629 Starting 1 thread 00:08:40.774 00:08:40.774 test: (groupid=0, jobs=1): err= 0: pid=75698: Fri Nov 29 02:55:55 2024 00:08:40.774 read: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec) 00:08:40.774 slat (nsec): min=4231, max=51645, avg=5252.42, stdev=2255.91 00:08:40.774 clat (usec): min=341, max=9210, avg=3139.40, stdev=964.84 00:08:40.774 lat (usec): min=345, max=9261, 
avg=3144.65, stdev=965.79 00:08:40.774 clat percentiles (usec): 00:08:40.774 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:08:40.774 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2933], 00:08:40.774 | 70.00th=[ 3195], 80.00th=[ 3687], 90.00th=[ 4555], 95.00th=[ 5276], 00:08:40.774 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 8356], 00:08:40.774 | 99.99th=[ 9110] 00:08:40.774 bw ( KiB/s): min=76472, max=82786, per=97.33%, avg=78995.33, stdev=3342.34, samples=3 00:08:40.774 iops : min=19118, max=20698, avg=19749.33, stdev=836.44, samples=3 00:08:40.774 write: IOPS=20.2k, BW=79.1MiB/s (82.9MB/s)(158MiB/2001msec); 0 zone resets 00:08:40.774 slat (nsec): min=4297, max=62760, avg=5414.84, stdev=2339.54 00:08:40.774 clat (usec): min=225, max=9148, avg=3156.14, stdev=966.17 00:08:40.774 lat (usec): min=230, max=9159, avg=3161.55, stdev=967.12 00:08:40.774 clat percentiles (usec): 00:08:40.774 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:40.774 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:08:40.774 | 70.00th=[ 3228], 80.00th=[ 3720], 90.00th=[ 4621], 95.00th=[ 5276], 00:08:40.775 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[ 8455], 00:08:40.775 | 99.99th=[ 8979] 00:08:40.775 bw ( KiB/s): min=76496, max=83177, per=97.63%, avg=79067.00, stdev=3596.57, samples=3 00:08:40.775 iops : min=19124, max=20794, avg=19766.67, stdev=899.00, samples=3 00:08:40.775 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:08:40.775 lat (msec) : 2=0.97%, 4=82.72%, 10=16.28% 00:08:40.775 cpu : usr=98.95%, sys=0.15%, ctx=30, majf=0, minf=624 00:08:40.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:40.775 issued rwts: total=40601,40513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:40.775 00:08:40.775 Run status group 0 (all jobs): 00:08:40.775 READ: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:08:40.775 WRITE: bw=79.1MiB/s (82.9MB/s), 79.1MiB/s-79.1MiB/s (82.9MB/s-82.9MB/s), io=158MiB (166MB), run=2001-2001msec 00:08:40.775 ----------------------------------------------------- 00:08:40.775 Suppressions used: 00:08:40.775 count bytes template 00:08:40.775 1 32 /usr/src/fio/parse.c 00:08:40.775 1 8 libtcmalloc_minimal.so 00:08:40.775 ----------------------------------------------------- 00:08:40.775 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:40.775 02:55:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:40.775 02:55:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:40.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:40.775 fio-3.35 00:08:40.775 Starting 1 thread 00:08:46.062 00:08:46.062 test: (groupid=0, jobs=1): err= 0: pid=75754: Fri Nov 29 02:56:01 2024 00:08:46.062 read: IOPS=15.8k, BW=61.6MiB/s (64.5MB/s)(123MiB/2002msec) 00:08:46.062 slat (nsec): min=4899, max=76092, avg=6593.13, stdev=3664.64 00:08:46.063 clat (usec): min=1008, max=15064, avg=4029.14, stdev=1363.36 00:08:46.063 lat (usec): min=1013, max=15127, avg=4035.73, stdev=1364.54 00:08:46.063 clat percentiles (usec): 00:08:46.063 | 1.00th=[ 2040], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2966], 00:08:46.063 | 30.00th=[ 3097], 40.00th=[ 3261], 50.00th=[ 3458], 60.00th=[ 3916], 00:08:46.063 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6063], 95.00th=[ 6652], 00:08:46.063 | 99.00th=[ 7701], 99.50th=[ 8291], 99.90th=[10421], 99.95th=[12649], 00:08:46.063 | 99.99th=[14877] 00:08:46.063 bw ( KiB/s): min=61453, max=66976, per=100.00%, avg=64897.67, stdev=3004.34, samples=3 00:08:46.063 iops : min=15363, max=16744, avg=16224.33, stdev=751.23, samples=3 00:08:46.063 write: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2002msec); 0 zone resets 00:08:46.063 slat (usec): min=4, max=162, avg= 6.79, stdev= 3.75 00:08:46.063 clat (usec): min=999, max=14932, avg=4062.97, stdev=1366.32 00:08:46.063 lat (usec): min=1004, max=14947, avg=4069.76, stdev=1367.53 00:08:46.063 clat percentiles (usec): 00:08:46.063 | 1.00th=[ 2114], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2999], 
00:08:46.063 | 30.00th=[ 3130], 40.00th=[ 3294], 50.00th=[ 3490], 60.00th=[ 3949], 00:08:46.063 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6063], 95.00th=[ 6718], 00:08:46.063 | 99.00th=[ 7767], 99.50th=[ 8356], 99.90th=[10814], 99.95th=[12911], 00:08:46.063 | 99.99th=[14222] 00:08:46.063 bw ( KiB/s): min=61764, max=66328, per=100.00%, avg=64649.33, stdev=2509.89, samples=3 00:08:46.063 iops : min=15441, max=16582, avg=16162.33, stdev=627.47, samples=3 00:08:46.063 lat (usec) : 1000=0.01% 00:08:46.063 lat (msec) : 2=0.87%, 4=60.55%, 10=38.45%, 20=0.12% 00:08:46.063 cpu : usr=98.40%, sys=0.05%, ctx=14, majf=0, minf=624 00:08:46.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:46.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.063 issued rwts: total=31547,31569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.063 00:08:46.063 Run status group 0 (all jobs): 00:08:46.063 READ: bw=61.6MiB/s (64.5MB/s), 61.6MiB/s-61.6MiB/s (64.5MB/s-64.5MB/s), io=123MiB (129MB), run=2002-2002msec 00:08:46.063 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2002-2002msec 00:08:46.063 ----------------------------------------------------- 00:08:46.063 Suppressions used: 00:08:46.063 count bytes template 00:08:46.063 1 32 /usr/src/fio/parse.c 00:08:46.063 1 8 libtcmalloc_minimal.so 00:08:46.063 ----------------------------------------------------- 00:08:46.063 00:08:46.063 ************************************ 00:08:46.063 END TEST nvme_fio 00:08:46.063 ************************************ 00:08:46.063 02:56:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:46.063 02:56:01 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:46.063 00:08:46.063 real 0m26.940s 00:08:46.063 user 0m18.674s 00:08:46.063 sys 0m13.423s 00:08:46.063 02:56:01 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.063 02:56:01 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:46.063 00:08:46.063 real 1m33.943s 00:08:46.063 user 3m33.090s 00:08:46.063 sys 0m23.309s 00:08:46.063 02:56:01 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.063 ************************************ 00:08:46.063 END TEST nvme 00:08:46.063 ************************************ 00:08:46.063 02:56:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.063 02:56:01 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:46.063 02:56:01 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:46.063 02:56:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.063 02:56:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.063 02:56:01 -- common/autotest_common.sh@10 -- # set +x 00:08:46.063 ************************************ 00:08:46.063 START TEST nvme_scc 00:08:46.063 ************************************ 00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:46.063 * Looking for test storage... 
00:08:46.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@345 -- # : 1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@368 -- # return 0
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.063 --rc genhtml_branch_coverage=1
00:08:46.063 --rc genhtml_function_coverage=1
00:08:46.063 --rc genhtml_legend=1
00:08:46.063 --rc geninfo_all_blocks=1
00:08:46.063 --rc geninfo_unexecuted_blocks=1
00:08:46.063
00:08:46.063 '
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.063 --rc genhtml_branch_coverage=1
00:08:46.063 --rc genhtml_function_coverage=1
00:08:46.063 --rc genhtml_legend=1
00:08:46.063 --rc geninfo_all_blocks=1
00:08:46.063 --rc geninfo_unexecuted_blocks=1
00:08:46.063
00:08:46.063 '
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.063 --rc genhtml_branch_coverage=1
00:08:46.063 --rc genhtml_function_coverage=1
00:08:46.063 --rc genhtml_legend=1
00:08:46.063 --rc geninfo_all_blocks=1
00:08:46.063 --rc geninfo_unexecuted_blocks=1
00:08:46.063
00:08:46.063 '
00:08:46.063 02:56:01 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:46.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:46.063 --rc genhtml_branch_coverage=1
00:08:46.063 --rc genhtml_function_coverage=1
00:08:46.063 --rc genhtml_legend=1
00:08:46.063 --rc geninfo_all_blocks=1
00:08:46.063 --rc geninfo_unexecuted_blocks=1
00:08:46.063
00:08:46.063 '
00:08:46.063 02:56:01 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:46.063 02:56:01 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:46.063 02:56:01 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:46.063 02:56:01 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:46.063 02:56:01 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:46.063 02:56:01 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:46.063 02:56:01 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.063 02:56:01 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.063 02:56:01 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:46.064 02:56:01 nvme_scc -- paths/export.sh@5 -- # export PATH
00:08:46.064 02:56:01 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
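The lt 1.15 2 trace above is scripts/common.sh checking whether the detected lcov is older than version 2: both version strings are split on '.', '-' and ':', each field is validated as a decimal number, and the first unequal field decides the result. A compact standalone sketch of the same comparison (ver_lt is a hypothetical name, not the script's exact function):

    #!/usr/bin/env bash
    # Sketch: succeed (return 0) when dotted version $1 is strictly older than $2,
    # comparing numeric fields left to right as cmp_versions does above.
    ver_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # A missing field compares as 0, so "1.15" vs "2" acts like "1.15" vs "2.0".
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # Usage: ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"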
00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:46.064 02:56:01 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:46.064 02:56:01 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.064 02:56:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:46.064 02:56:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:46.064 02:56:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:46.064 02:56:01 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.589 Waiting for block devices as requested 00:08:46.589 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.589 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.589 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:46.848 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.125 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:52.125 02:56:07 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:52.125 02:56:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.125 02:56:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:52.125 02:56:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.125 02:56:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
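From scan_nvme_ctrls onward, the trace is functions.sh walking /sys/class/nvme/nvme*, and for each controller having nvme_get run nvme id-ctrl and fold every "field : value" line of its output into a bash associative array (nvme0[vid], nvme0[mdts], and so on); that loop is what produces the long eval dump that follows. A rough standalone sketch of the parsing step (parse_id_ctrl is an illustrative name, not the functions.sh implementation; assumes nvme-cli is installed):

    #!/usr/bin/env bash
    # Sketch: turn `nvme id-ctrl` output lines such as
    #   vid       : 0x1b36
    #   mdts      : 7
    # into an associative array keyed by register name.
    declare -A ctrl
    parse_id_ctrl() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # keys are padded with spaces
            val="${val#"${val%%[![:space:]]*}"}"   # left-trim the value
            [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }
    # Usage: parse_id_ctrl /dev/nvme0; echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"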
00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.125 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
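Among the fields captured above, mdts=7 bounds how large a single transfer this controller accepts: per the NVMe specification, MDTS is a power of two expressed in units of the controller's minimum memory page size, which is 4 KiB when CAP.MPSMIN is 0 (the 4 KiB figure is an assumption typical of these QEMU controllers, not something the trace states). The arithmetic:

    #!/usr/bin/env bash
    # Sketch: convert the mdts value captured above into a byte limit.
    mdts=7          # from the id-ctrl dump above
    page_size=4096  # assumes CAP.MPSMIN = 0, i.e. 4 KiB minimum page size
    echo "max data transfer size: $(( (1 << mdts) * page_size )) bytes"   # 524288 = 512 KiB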
00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:52.126 02:56:07 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.126 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.127 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.128 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:52.128 
02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
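For this first namespace the scan has already recorded nsze=0x140000 and flbas=0x4, i.e. LBA format 4 is the one in use; its descriptor a little further below reads "ms:0 lbads:12 rp:0 (in use)", meaning 4096-byte blocks with no metadata. Namespace capacity follows directly from those two fields; a sketch of the arithmetic with the values copied from the trace:

    #!/usr/bin/env bash
    # Sketch: namespace capacity from the id-ns fields captured in this scan.
    nsze=0x140000   # namespace size in logical blocks
    lbads=12        # log2(block size) for the in-use LBA format
    bytes=$(( nsze * (1 << lbads) ))
    echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 5368709120 bytes = 5 GiB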
00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.128 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:52.129 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:52.129 02:56:07 nvme_scc 
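(Editorial note: the trace above replays nvme/functions.sh's nvme_get helper field by field. The pattern it shows is: run nvme-cli's id-ns/id-ctrl against a device, split every "reg : val" line on the first colon, and eval the trimmed pair into a globally scoped associative array named by the caller. Below is a minimal re-creation of that pattern under stated assumptions; nvme_get_sketch is a hypothetical name, and the whitespace trimming is approximate rather than the exact functions.sh source.)

nvme_get_sketch() {
    # usage: nvme_get_sketch <array-name> <nvme-subcommand> <device>
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # global assoc array, e.g. nvme0n1=(), as at functions.sh@20
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0", matching the keys in the trace
        [[ -n $val ]] || continue        # skip banner/blank lines (the [[ -n ... ]] at @22)
        val=${val# }                     # drop the single pad space nvme-cli prints after ':'
        eval "${ref}[$reg]=\$val"        # e.g. nvme0n1[nsze]="0x140000", as at @23
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. id-ns /dev/nvme0n1, as at @16
}

(The IFS=: / read -r pairs that repeat between every assignment in the trace are simply this loop advancing one line of nvme-cli output at a time.)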
-- nvme/functions.sh@21 -- # IFS=: 00:08:52.129 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:52.130 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.130 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:52.131 02:56:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.131 02:56:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:52.131 02:56:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.131 02:56:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:52.131 02:56:07 
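(Editorial note: at this point the trace finishes nvme0 and records it, then starts the next controller: functions.sh@60-63 fill the ctrls, nvmes, bdfs, and ordered_ctrls arrays, and scripts/common.sh's pci_can_use gates nvme1 by its PCI address. An illustrative outline of that enumeration pass follows; the readlink-based BDF lookup is an assumption standing in for however functions.sh@49 resolves it, and pci_can_use is the real gate seen in the trace, not redefined here.)

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do                   # functions.sh@47
    [[ -e $ctrl ]] || continue                          # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")     # BDF, e.g. 0000:00:11.0 (assumed lookup)
    pci_can_use "$pci" || continue                      # scripts/common.sh gate seen at @50
    ctrl_dev=${ctrl##*/}                                # nvme0, nvme1, ...
    ctrls["$ctrl_dev"]=$ctrl_dev                        # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                   # name of that controller's namespace map (@61)
    bdfs["$ctrl_dev"]=$pci                              # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # indexed by controller number (@63)
done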
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 
02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:52.131 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:52.132 
02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.132 02:56:07 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.132 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.133 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:52.133 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.134 02:56:07 
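(Editorial note: the namespace scan the trace is entering here, functions.sh@53-58, relies on two bash features worth calling out: a nameref so the generic _ctrl_ns variable writes into the per-controller nvme1_ns array, and an extglob pattern that matches both the generic char-dev nodes, ng1n*, and the block nodes, nvme1n*, under the controller's sysfs directory. A minimal sketch, assuming extglob and nullglob are enabled:)

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
declare -gA nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns                         # nameref: assignments below land in nvme1_ns (@53)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng1n1 and nvme1n1 (@54)
    [[ -e $ns ]] || continue                         # functions.sh@55
    ns_dev=${ns##*/}                                 # ng1n1 or nvme1n1 (@56)
    _ctrl_ns[${ns##*n}]=$ns_dev                      # key = digit after the last 'n', e.g. 1 (@58)
done

(Since ng1n1 and nvme1n1 share the trailing namespace id, both map to key 1 and the later assignment overwrites the earlier one, which is exactly why @58 appears twice per namespace in the trace.)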
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:52.134 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:52.135 02:56:07 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.135 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 
02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
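
Once nvme_get returns, later test code can consult the captured identify fields simply by indexing the array. A hypothetical spot-check using values that appear in this very trace (nsze 0x17a17a, nlbaf 7):

    echo "nvme1n1: nsze=${nvme1n1[nsze]} ncap=${nvme1n1[ncap]} flbas=${nvme1n1[flbas]}"
    [[ ${nvme1n1[nlbaf]} -eq 7 ]] && echo "8 LBA formats advertised (nlbaf=7)"
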
00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:52.136 
02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.136 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.136 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:52.137 02:56:07 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:52.137 02:56:07 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.137 02:56:07 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:52.137 02:56:07 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.137 02:56:07 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.137 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
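
Zooming out, this id-ctrl dump for nvme2 sits inside the controller enumeration loop whose steps the trace records at functions.sh@47-@63: iterate /sys/class/nvme/nvme*, skip devices that pci_can_use rejects, parse id-ctrl and each namespace via nvme_get, then register the results. A rough sketch of that outer loop, under the assumption that the PCI address is read from the controller's sysfs node (the excerpt only shows the resulting value, e.g. 0000:00:12.0):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(<"$ctrl/address")            # assumed source of the bdf
        pci_can_use "$pci" || continue     # honors the test's allow/block lists
        ctrl_dev=${ctrl##*/}               # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ...per-namespace nvme_get calls (see the sketch earlier)...
        ctrls["$ctrl_dev"]=$ctrl_dev            # ctrls[nvme2]=nvme2
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns       # nvmes[nvme2]=nvme2_ns
        bdfs["$ctrl_dev"]=$pci                  # bdfs[nvme2]=0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
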
00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:52.138 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.138 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:52.139 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.139 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:52.140 
02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.140 
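For readers following the trace above: the functions.sh@16-23 lines show the script piping `nvme id-ctrl` output through `IFS=: read -r reg val` and eval'ing each non-empty pair into a global associative array (here nvme2), after which @53-54 switches to the per-namespace pass through a nameref. Below is a minimal standalone sketch of that pattern; nvme_get_sketch and the whitespace trimming are illustrative, not the real SPDK helper — only the `local -gA` / `IFS=:` / `eval` shape is taken from the log.

    #!/usr/bin/env bash
    # Sketch of the key/value reader visible in the functions.sh@21-23 trace.
    # nvme_get_sketch is a hypothetical name; the real helper is nvme_get in
    # SPDK's test/nvme/functions.sh.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # global assoc array, as in the log
        while IFS=: read -r reg val; do        # split "reg : val" lines on ':'
            reg=${reg//[[:space:]]/}           # nvme-cli pads the register column
            val=${val#"${val%%[![:space:]]*}"} # left-trim the value
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"         # same effect as: nvme2[cqes]=0x44
        done < <("$@")                         # run the producer command
    }

    # Usage (requires nvme-cli and a /dev/nvme2 controller):
    #   nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2
    #   echo "oncs=${nvme2[oncs]} vwc=${nvme2[vwc]}"
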
02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.140 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.141 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:52.142 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 
02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.142 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.143 02:56:07 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:08:52.143 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.144 02:56:07 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:08:52.144 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- 
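The IFS=:/read/eval churn above is functions.sh's nvme_get helper folding each "field : value" line that nvme-cli prints into one slot of a global associative array (here nvme2n1). A minimal standalone sketch of that loop; the whitespace trimming and the stand-in producer are our own additions:

  #!/usr/bin/env bash
  # Sketch of the traced loop: run a producer command, split each output
  # line at the first ':' and store non-empty values in the array named $1.
  nvme_get_sketch() {
    local ref=$1 reg val; shift
    local -gA "$ref=()"               # e.g. nvme2n1=() in the trace
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # 'nsze   ' -> 'nsze' (trimming is ours)
      val=${val# }
      [[ -n $val ]] && eval "$ref[$reg]=\"$val\""   # eval, as in the trace
    done < <("$@")
  }
  # Stand-in for /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1:
  fake_id_ns() { printf 'nsze   : 0x100000\nflbas  : 0x4\n'; }
  nvme_get_sketch demo fake_id_ns
  echo "${demo[nsze]} ${demo[flbas]}"   # -> 0x100000 0x4

The eval mirrors the trace; printf -v "$ref[$reg]" '%s' "$val" would be the quoting-safe alternative.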
nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.145 02:56:07 nvme_scc -- 
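The three size fields just captured (nsze = ncap = nuse = 0x100000 blocks) plus flbas = 0x4, which selects LBA format 4 (reported as lbads:12 in these dumps), are enough to size the namespace. A quick arithmetic check on the traced values:

  # Sketch: namespace size from the captured fields.
  nsze=0x100000    # logical blocks
  lbads=12         # lbaf4 is "ms:0 lbads:12 rp:0 (in use)"; block = 2^lbads bytes
  echo "$(( nsze * (1 << lbads) )) bytes"          # 4294967296
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"    # 4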
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.145 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:52.146 02:56:07 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- 
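The mssrl/mcl/msrc triple (128/128/127) just recorded is what this nvme_scc suite cares about: per the NVMe spec these bound a Simple Copy command, msrc being 0's-based (127 means up to 128 source ranges). A sketch, ours, of checking a copy request against them:

  # Sketch: sanity-check a Simple Copy request against the captured limits.
  mssrl=128 mcl=128 msrc=127
  check_copy() {                          # args: blocks per source range
    local total=0 len
    (( $# <= msrc + 1 )) || { echo "too many ranges"; return 1; }
    for len in "$@"; do
      (( len <= mssrl )) || { echo "range too long"; return 1; }
      total=$(( total + len ))
    done
    (( total <= mcl )) || { echo "copy too long"; return 1; }
    echo "ok: $# ranges, $total blocks"
  }
  check_copy 64 64    # ok: 2 ranges, 128 blocks
  check_copy 64 65    # copy too long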
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.146 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:52.147 02:56:07 nvme_scc -- 
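Each lbafN value stored above is the literal "ms:<metadata bytes> lbads:<log2 data bytes> rp:<relative perf>" text from nvme-cli. Decoding one such string (the regex is ours):

  # Sketch: unpack an lbafN string captured above.
  decode_lbaf() {
    [[ $1 =~ ms:([0-9]+)\ +lbads:([0-9]+)\ +rp:([0-9]+) ]] || return 1
    echo "data=$(( 1 << BASH_REMATCH[2] ))B meta=${BASH_REMATCH[1]}B rp=${BASH_REMATCH[3]}"
  }
  decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # data=4096B meta=0B rp=0
  decode_lbaf 'ms:64 lbads:9 rp:0 '           # data=512B meta=64B rp=0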
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:52.147 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:07 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:52.148 
02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:52.148 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:52.149 02:56:08 nvme_scc -- 
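The loop head repeated above uses an extglob so a single pass picks up both the generic char-dev names (ng2nN) and the block-dev names (nvme2nN) under the controller's sysfs directory, and ${ns##*n} reduces either to the namespace number used as the _ctrl_ns index. Illustrative run (paths only exist on a box with these devices):

  # Sketch: what the @(..|..)* pattern in the traced loop expands to.
  shopt -s extglob nullglob     # extglob must be set before the pattern parses
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "dev=${ns##*/} nsid=${ns##*n}"   # ng2n3 -> nsid 3, nvme2n1 -> nsid 1
  done                          # prints nothing without such devices (nullglob)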
nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.149 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:52.150 02:56:08 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:52.150 02:56:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:52.150 02:56:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:52.150 02:56:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.150 02:56:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 
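At this point functions.sh has finished nvme2: lines @60-@63 in the trace file the controller away into four lookup tables. Reconstructed with the traced values (the declare lines are ours):

  # Sketch: the per-controller bookkeeping built at functions.sh@60-63.
  declare -A ctrls nvmes bdfs; declare -a ordered_ctrls
  ctrl_dev=nvme2
  ctrls[$ctrl_dev]=nvme2                     # -> name of its id-ctrl array
  nvmes[$ctrl_dev]=nvme2_ns                  # -> name of its namespace map
  bdfs[$ctrl_dev]=0000:00:12.0               # -> PCI address
  ordered_ctrls[${ctrl_dev/nvme/}]=nvme2     # slot 2 in controller order
  echo "${bdfs[nvme2]} ${ordered_ctrls[2]}"  # 0000:00:12.0 nvme2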
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.150 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:52.151 02:56:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:52.151 02:56:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 
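The wall of trace above and below is one small loop unrolled hundreds of times: each nvme id-ctrl output line is split on ':' into a reg/val pair, empty values are skipped, and the rest are eval'd into a named associative array (nvme3 at this point). A condensed re-creation of the pattern, simplified in that it assigns into a fixed array instead of eval'ing into a caller-named one:

```bash
# Condensed sketch of the read/eval pattern traced at functions.sh@21-23
# (the real nvme_get evals into a caller-named global array and also walks
# each namespace). Feed it "reg : val" lines, id-ctrl style.
parse_id_ctrl() {
  declare -gA ctrl=()
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # keys arrive padded, e.g. "mdts    "
    [[ -n $val ]] || continue       # same guard as functions.sh@22
    ctrl[$reg]=${val# }             # functions.sh uses eval; a plain
  done                              #   assignment works for a fixed name
}
parse_id_ctrl <<< $'mdts    : 7\noncs    : 0x15d'
echo "mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"   # -> mdts=7 oncs=0x15d
```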
02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.151 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:52.152 02:56:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 
02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:52.152 
02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.152 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 
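Two fields traced a few lines up are worth decoding: sqes=0x66 and cqes=0x44. Per the NVMe spec each packs two powers of two, the low nibble giving the required queue-entry size and the high nibble the maximum:

```bash
# Worked decode of sqes=0x66 and cqes=0x44 as traced above: low nibble is
# the required entry size (as a power of two), high nibble the maximum.
decode_qes() {
  local name=$1 qes=$2
  printf '%s: required %d B, maximum %d B\n' \
    "$name" "$(( 1 << (qes & 0xf) ))" "$(( 1 << (qes >> 4 & 0xf) ))"
}
decode_qes sqes 0x66   # -> required 64 B, maximum 64 B
decode_qes cqes 0x44   # -> required 16 B, maximum 16 B
```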
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:52.153 02:56:08 nvme_scc -- 
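By this point nvme3 is fully parsed, and the trace shows the scan filing it into the same global maps used for the first three controllers (the ordered_ctrls slot follows on the next trace line). A toy version of that bookkeeping with this run's values; register_ctrl is a made-up wrapper, not a functions.sh helper:

```bash
# Sketch of the bookkeeping the trace performs at functions.sh@60-63 for
# every controller; the values below are the ones traced for this run.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
register_ctrl() {                    # hypothetical wrapper, for illustration
  local ctrl_dev=$1 pci=$2
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns    # name of the per-namespace array
  bdfs[$ctrl_dev]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
}
register_ctrl nvme3 0000:00:13.0
echo "nvme3 sits at ${bdfs[nvme3]}, namespaces tracked in ${nvmes[nvme3]}"
```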
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:52.153 02:56:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:52.153 02:56:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:52.154 02:56:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.154 02:56:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.154 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.154 02:56:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.154 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:52.412 02:56:08 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:52.412 02:56:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:52.412 02:56:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:52.412 02:56:08 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.238 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.238 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.238 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.238 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:53.238 02:56:09 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:53.238 02:56:09 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:53.238 02:56:09 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.238 02:56:09 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:53.238 ************************************ 00:08:53.238 START TEST nvme_simple_copy 00:08:53.238 ************************************ 00:08:53.238 02:56:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:53.496 Initializing NVMe Controllers 00:08:53.496 Attaching to 0000:00:10.0 00:08:53.496 Controller supports SCC. Attached to 0000:00:10.0 00:08:53.496 Namespace ID: 1 size: 6GB 00:08:53.496 Initialization complete. 
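The feature scan traced above kept all four controllers because each reported oncs=0x15d, and ctrl_has_scc only asks whether bit 8 (Copy) of ONCS is set. The arithmetic, worked out:

```bash
# Sketch of the gate traced at functions.sh@186-188: a controller is kept
# by get_ctrls_with_feature scc iff bit 8 (Copy) of its ONCS field is set.
oncs=0x15d                       # value every controller reported above
if (( oncs & 1 << 8 )); then     # 0x15d & 0x100 == 0x100 -> non-zero, true
  echo "controller supports the Simple Copy command"
fi
```

The suite then takes the first controller it echoed, nvme1 at 0000:00:10.0, as the target; its simple-copy report resumes below.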
00:08:53.496 00:08:53.496 Controller QEMU NVMe Ctrl (12340 ) 00:08:53.496 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:53.496 Namespace Block Size:4096 00:08:53.496 Writing LBAs 0 to 63 with Random Data 00:08:53.496 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:53.496 LBAs matching Written Data: 64 00:08:53.496 00:08:53.496 real 0m0.221s 00:08:53.496 user 0m0.066s 00:08:53.496 sys 0m0.053s 00:08:53.496 ************************************ 00:08:53.496 END TEST nvme_simple_copy 00:08:53.496 ************************************ 00:08:53.496 02:56:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.497 02:56:09 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 ************************************ 00:08:53.497 END TEST nvme_scc 00:08:53.497 ************************************ 00:08:53.497 00:08:53.497 real 0m7.617s 00:08:53.497 user 0m1.037s 00:08:53.497 sys 0m1.362s 00:08:53.497 02:56:09 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.497 02:56:09 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 02:56:09 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:53.497 02:56:09 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:53.497 02:56:09 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:53.497 02:56:09 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:53.497 02:56:09 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:53.497 02:56:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.497 02:56:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.497 02:56:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 ************************************ 00:08:53.497 START TEST nvme_fdp 00:08:53.497 ************************************ 00:08:53.497 02:56:09 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:08:53.497 * Looking for test storage... 00:08:53.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.497 02:56:09 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.497 02:56:09 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.497 02:56:09 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.755 02:56:09 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.755 --rc genhtml_branch_coverage=1 00:08:53.755 --rc genhtml_function_coverage=1 00:08:53.755 --rc genhtml_legend=1 00:08:53.755 --rc geninfo_all_blocks=1 00:08:53.755 --rc geninfo_unexecuted_blocks=1 00:08:53.755 00:08:53.755 ' 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.755 --rc genhtml_branch_coverage=1 00:08:53.755 --rc genhtml_function_coverage=1 00:08:53.755 --rc genhtml_legend=1 00:08:53.755 --rc geninfo_all_blocks=1 00:08:53.755 --rc geninfo_unexecuted_blocks=1 00:08:53.755 00:08:53.755 ' 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.755 --rc genhtml_branch_coverage=1 00:08:53.755 --rc genhtml_function_coverage=1 00:08:53.755 --rc genhtml_legend=1 00:08:53.755 --rc geninfo_all_blocks=1 00:08:53.755 --rc geninfo_unexecuted_blocks=1 00:08:53.755 00:08:53.755 ' 00:08:53.755 02:56:09 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.755 --rc genhtml_branch_coverage=1 00:08:53.755 --rc genhtml_function_coverage=1 00:08:53.755 --rc genhtml_legend=1 00:08:53.755 --rc geninfo_all_blocks=1 00:08:53.755 --rc geninfo_unexecuted_blocks=1 00:08:53.755 00:08:53.755 ' 00:08:53.755 02:56:09 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:53.755 02:56:09 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.756 02:56:09 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.756 02:56:09 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.756 02:56:09 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.756 02:56:09 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.756 02:56:09 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.756 02:56:09 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.756 02:56:09 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.756 02:56:09 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:53.756 02:56:09 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:53.756 02:56:09 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:53.756 02:56:09 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.756 02:56:09 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.014 Waiting for block devices as requested 00:08:54.014 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.014 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.272 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.272 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.582 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:59.582 02:56:15 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:59.582 02:56:15 nvme_fdp 
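Before the FDP scan gets going, note the lcov probe traced a little further up: scripts/common.sh compared lcov's 1.15 against 2 field by field (lt 1.15 2 came back true, selecting the pre-2.0 LCOV_OPTS). A condensed sketch of that comparator; the real cmp_versions also supports '>', '==', and a caller-chosen operator:

```bash
# Condensed sketch of the version gate traced above (scripts/common.sh's
# cmp_versions, reduced to the strict '<' case used for the lcov probe).
version_lt() {
  local IFS=.-: v                  # split fields on '.', '-', or ':'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
  for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1                         # equal -> not strictly less
}
version_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option set"
```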
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:59.582 02:56:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.582 02:56:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:59.582 02:56:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.582 02:56:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- 
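Each controller scan opens with the pci_can_use gate traced just above: the empty-looking [[ =~ 0000:00:11.0 ]] is an unset allow-list being matched against the BDF, and the [[ -z '' ]] that follows lets everything through when no list is configured. A guess at the shape of that check; the PCI_ALLOWED name is an assumption, not read from the trace:

```bash
# Guessed shape of the gate traced at scripts/common.sh@18-27; the
# PCI_ALLOWED variable name is an assumption, not read from the trace.
pci_can_use() {
  local bdf=$1
  # Explicitly allowed? With an empty list this regex never matches,
  # which is the bare "[[ =~ 0000:00:11.0 ]]" seen in the trace.
  [[ ${PCI_ALLOWED:-} =~ $bdf ]] && return 0
  # No allow-list configured -> every device is fair game.
  [[ -z ${PCI_ALLOWED:-} ]] && return 0
  return 1
}
pci_can_use 0000:00:11.0 && echo "scanning the controller at 0000:00:11.0"
```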
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:59.582 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:59.582 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.583 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:59.583 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:59.584 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 
02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:59.584 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:59.585 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:59.585 02:56:15 
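The thousands of xtrace records above all come from one small loop in nvme/functions.sh: nvme_get runs nvme-cli's id-ctrl against the device and folds every "field : value" line of its output into a global bash associative array. A minimal sketch of that pattern, reconstructed from the trace (the whitespace trimming and the commented usage line are assumptions, not code copied from the script):

    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # declare the global assoc array, e.g. nvme0=()
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # header lines with no value are skipped
        reg=${reg//[[:space:]]/}           # "oacs " -> "oacs"
        val="${val# }"                     # drop the space after the ':'
        eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme0[oacs]="0x12a"
      done < <("$@")
    }
    # usage: nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0

Every IFS=: / read -r reg val pair in the log is one pass through this loop, which is why the trace repeats those two records once per identify field.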
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:59.585 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.585 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:59.586 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.586 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
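ng0n1 reports eight LBA formats, lbaf0 through lbaf7. In each entry, lbads is the base-2 log of the logical block size, ms is the per-block metadata size in bytes, and rp is a relative-performance hint; the low nibble of flbas selects the format in use, here lbaf4, i.e. 2^12 = 4096-byte blocks with no metadata. A hedged bash sketch of that decoding (the helper is illustrative, not part of nvme/functions.sh; the array values are copied from the trace above):

    declare -A ng0n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng0n1[flbas]} & 0xf ))       # low nibble of FLBAS picks the format index
    if [[ ${ng0n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]; then
      echo "block size: $(( 1 << BASH_REMATCH[1] )) bytes"   # prints 4096
    fi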
00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.587 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:59.588 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.588 02:56:15 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.588 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:59.589 02:56:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.589 02:56:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:59.589 02:56:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.589 02:56:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:59.589 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.589 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
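The repeated IFS=: / read -r reg val / eval triplets traced above are nvme/functions.sh splitting the key:value output of nvme-cli and storing every register in a global associative array named after the device (nvme0, nvme1, nvme0n1, ...). A minimal sketch of that loop, reconstructed only from the functions.sh@16-23 statements visible in this trace; the whitespace trimming of reg/val and the exact quoting are assumptions:

    nvme_get() {
        local ref=$1 reg val                 # functions.sh@17: ref=nvme1, etc.
        shift                                # functions.sh@18: drop the array name
        local -gA "$ref=()"                  # functions.sh@20: declare global assoc array
        while IFS=: read -r reg val; do      # functions.sh@21: split "reg : val" lines
            # functions.sh@22/23: keep only lines that carry a value
            # (the real helper also trims surrounding whitespace; omitted here)
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

Each eval above, e.g. nvme1[vid]="0x1b36", is one iteration of this loop.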
00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.590 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:59.591 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:59.592 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
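The three size fields just captured for ng1n1 (nsze, ncap and nuse, all 0x17a17a) count logical blocks, and flbas=0x7 selects LBA format 7, which the lbaf table further down in this dump lists with lbads:12, i.e. 4096-byte blocks. Under that reading, a quick sanity check of the namespace size:

    echo $(( 0x17a17a ))           # 1548666 logical blocks
    echo $(( 0x17a17a * 4096 ))    # 6343335936 bytes, roughly a 6.3 GB namespace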
00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.592 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:59.593 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
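The functions.sh@54 extglob traced earlier, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is why the character node ng1n1 (this pass) and the block node nvme1n1 (the next pass) each get their own array: for ctrl=/sys/class/nvme/nvme1 it matches both naming schemes, and functions.sh@58 then files the parsed device under its namespace number in the nvme1_ns nameref. A small sketch of those parameter expansions, with the paths assumed purely for illustration:

    ctrl=/sys/class/nvme/nvme1
    echo "ng${ctrl##*nvme}"    # -> ng1     (strip everything through the last "nvme")
    echo "${ctrl##*/}n"        # -> nvme1n  (basename of $ctrl plus "n")
    ns=ng1n1
    echo "${ns##*n}"           # -> 1       (namespace index used as the _ctrl_ns key)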
00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.593 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.594 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:59.594 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.594 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:59.595 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
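[editor's note] The trace above shows the nvme_get helper in test/nvme/functions.sh at work: it runs nvme id-ns on the device node, splits each "field : value" line on ':' (the repeated IFS=: / read -r reg val pair), and evals each pair into a global associative array named after the device. A minimal standalone sketch of that pattern, assuming bash 4.2+; the name nvme_get_sketch and the usage line are illustrative, not the project's API:

    # Sketch of the parse loop traced above (functions.sh@16-23).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # declares a global array, e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # field name, e.g. nsze, flbas, lbaf0
            [[ -n $val ]] && eval "${ref}[\$reg]=\"\${val# }\""
        done < <("$@")
    }
    # usage (hypothetical): nvme_get_sketch nvme1n1 nvme id-ns /dev/nvme1n1
    # afterwards: echo "${nvme1n1[nsze]}"   # -> 0x17a17a, as captured above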
00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:59.595 02:56:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.595 02:56:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:59.595 02:56:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.595 02:56:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.595 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
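[editor's note] Among the values just captured, nvme2[ver]=0x10400 is the packed NVMe spec version: major in bits 31:16, minor in 15:08, tertiary in 07:00. A quick sketch to decode it; decode_nvme_ver is a hypothetical helper, not part of functions.sh:

    decode_nvme_ver() {
        local ver=$(( $1 ))
        printf '%d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    }
    decode_nvme_ver 0x10400    # -> 1.4.0, i.e. this QEMU controller reports NVMe 1.4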
00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:59.596 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:59.596 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
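[editor's note] The wctemp=343 and cctemp=373 fields captured above are the warning and critical composite-temperature thresholds, reported in kelvins per the spec. Converting for readability; k_to_c is a hypothetical helper using the integer offset (the exact offset is 273.15):

    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343    # warning threshold  -> 70 degrees C
    k_to_c 373    # critical threshold -> 100 degrees C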
00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:59.597 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.597 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
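[editor's note] A few entries back the trace recorded nvme2[sqes]=0x66 and nvme2[cqes]=0x44; each byte packs the required and maximum submission/completion queue entry sizes as log2 values (low nibble = required, high nibble = maximum). A small sketch to unpack them; decode_qes is hypothetical:

    decode_qes() {
        local qes=$(( $1 ))
        echo "required $(( 1 << (qes & 0xf) )) bytes, max $(( 1 << (qes >> 4) )) bytes"
    }
    decode_qes 0x66    # SQ entries: required 64 bytes, max 64 bytes
    decode_qes 0x44    # CQ entries: required 16 bytes, max 16 bytes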
00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.598 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 
02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.599 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.600 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:08:59.601 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 
02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.601 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:08:59.602 
02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:08:59.602 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:08:59.603 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.603 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.603 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:59.604 02:56:15 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.604 
02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:59.604 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:59.604 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.605 
02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:59.605 02:56:15 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:59.605 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:59.606 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.606 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:59.607 02:56:15 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:59.607 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:59.608 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:59.608 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:59.609 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:59.609 02:56:15 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:59.609 02:56:15 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:59.609 02:56:15 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:59.609 02:56:15 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
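Between registering nvme2 and the id-ctrl dump above, the scan loop (nvme/functions.sh@47-52) moved on to /sys/class/nvme/nvme3: it resolved the controller's PCI address (0000:00:13.0), asked pci_can_use from scripts/common.sh whether that device may be touched, and only then invoked nvme_get nvme3 id-ctrl /dev/nvme3. The traced pair of tests, [[ =~ 0000:00:13.0 ]] with an empty left-hand side followed by [[ -z '' ]] and return 0, is consistent with an allow-list check along the following lines; the PCI_ALLOWED variable name is an assumption, since the log only shows both expansions empty.

# Hedged reconstruction of scripts/common.sh@18-27 as traced above.
pci_can_use() {
    local i                                    # mirrors the traced 'local i'
    [[ ${PCI_ALLOWED:-} =~ $1 ]] && return 0   # device explicitly allow-listed
    [[ -z ${PCI_ALLOWED:-} ]] && return 0      # no allow-list at all: device is usable
    return 1                                   # allow-list set, device not on it
}

With the list empty in this run, the first test fails and the second succeeds, matching the return 0 seen in the trace before ctrl_dev=nvme3 is set.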
00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:59.609 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 
02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.610 02:56:15 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:59.610 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:59.611 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:59.612 02:56:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:59.612 02:56:15 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:59.870 02:56:15 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:59.870 02:56:15 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:59.870 02:56:15 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.696 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.696 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.696 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.696 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.696 02:56:16 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:00.696 02:56:16 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:00.696 02:56:16 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.696 02:56:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:00.696 ************************************ 00:09:00.696 START TEST nvme_flexible_data_placement 00:09:00.696 ************************************ 00:09:00.696 02:56:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:00.956 Initializing NVMe Controllers 00:09:00.956 Attaching to 0000:00:13.0 00:09:00.956 Controller supports FDP. Attached to 0000:00:13.0 00:09:00.956 Namespace ID: 1 Endurance Group ID: 1 00:09:00.956 Initialization complete.
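The controller selection traced above comes down to a single bit test: functions.sh reads back the CTRATT value it captured from identify-controller and checks bit 19, the Flexible Data Placement attribute, so only nvme3 (ctratt=0x88010) qualifies while the 0x8000 controllers do not. A minimal standalone sketch of that check, using the CTRATT values observed in this trace:

    # FDP capability test in the style of nvme/functions.sh ctrl_has_fdp;
    # CTRATT bit 19 advertises Flexible Data Placement support.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }

    ctrl_has_fdp 0x8000  && echo fdp || echo no-fdp   # nvme0/nvme1/nvme2 -> no-fdp
    ctrl_has_fdp 0x88010 && echo fdp || echo no-fdp   # nvme3 -> fdp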
00:09:00.956 00:09:00.956 ================================== 00:09:00.956 == FDP tests for Namespace: #01 == 00:09:00.956 ================================== 00:09:00.956 00:09:00.956 Get Feature: FDP: 00:09:00.956 ================= 00:09:00.956 Enabled: Yes 00:09:00.956 FDP configuration Index: 0 00:09:00.956 00:09:00.956 FDP configurations log page 00:09:00.956 =========================== 00:09:00.956 Number of FDP configurations: 1 00:09:00.956 Version: 0 00:09:00.956 Size: 112 00:09:00.956 FDP Configuration Descriptor: 0 00:09:00.956 Descriptor Size: 96 00:09:00.956 Reclaim Group Identifier format: 2 00:09:00.956 FDP Volatile Write Cache: Not Present 00:09:00.956 FDP Configuration: Valid 00:09:00.956 Vendor Specific Size: 0 00:09:00.956 Number of Reclaim Groups: 2 00:09:00.956 Number of Reclaim Unit Handles: 8 00:09:00.956 Max Placement Identifiers: 128 00:09:00.956 Number of Namespaces Supported: 256 00:09:00.956 Reclaim Unit Nominal Size: 6000000 bytes 00:09:00.956 Estimated Reclaim Unit Time Limit: Not Reported 00:09:00.956 RUH Desc #000: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #001: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #002: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #003: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #004: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #005: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #006: RUH Type: Initially Isolated 00:09:00.956 RUH Desc #007: RUH Type: Initially Isolated 00:09:00.956 00:09:00.956 FDP reclaim unit handle usage log page 00:09:00.956 ====================================== 00:09:00.956 Number of Reclaim Unit Handles: 8 00:09:00.956 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:00.956 RUH Usage Desc #001: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #002: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #003: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #004: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #005: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #006: RUH Attributes: Unused 00:09:00.956 RUH Usage Desc #007: RUH Attributes: Unused 00:09:00.956 00:09:00.956 FDP statistics log page 00:09:00.956 ======================= 00:09:00.956 Host bytes with metadata written: 1798029312 00:09:00.956 Media bytes with metadata written: 1799036928 00:09:00.956 Media bytes erased: 0 00:09:00.956 00:09:00.956 FDP Reclaim unit handle status 00:09:00.956 ============================== 00:09:00.956 Number of RUHS descriptors: 2 00:09:00.956 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000d44 00:09:00.956 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:00.956 00:09:00.956 FDP write on placement id: 0 success 00:09:00.956 00:09:00.956 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:00.956 00:09:00.956 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:00.956 00:09:00.956 Get Feature: FDP Events for Placement handle: #0 00:09:00.956 ======================== 00:09:00.956 Number of FDP Events: 6 00:09:00.956 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:00.956 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:00.956 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:00.956 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:00.956 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:00.956 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:00.956 00:09:00.956 FDP events log
page 00:09:00.956 =================== 00:09:00.956 Number of FDP events: 1 00:09:00.956 FDP Event #0: 00:09:00.956 Event Type: RU Not Written to Capacity 00:09:00.956 Placement Identifier: Valid 00:09:00.956 NSID: Valid 00:09:00.956 Location: Valid 00:09:00.956 Placement Identifier: 0 00:09:00.956 Event Timestamp: 3 00:09:00.956 Namespace Identifier: 1 00:09:00.956 Reclaim Group Identifier: 0 00:09:00.956 Reclaim Unit Handle Identifier: 0 00:09:00.956 00:09:00.956 FDP test passed 00:09:00.956 00:09:00.956 real 0m0.208s 00:09:00.956 user 0m0.056s 00:09:00.956 sys 0m0.051s 00:09:00.956 02:56:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.956 ************************************ 00:09:00.956 02:56:16 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 END TEST nvme_flexible_data_placement 00:09:00.956 ************************************ 00:09:00.956 ************************************ 00:09:00.956 END TEST nvme_fdp 00:09:00.956 ************************************ 00:09:00.956 00:09:00.956 real 0m7.458s 00:09:00.956 user 0m1.030s 00:09:00.956 sys 0m1.359s 00:09:00.956 02:56:16 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.956 02:56:16 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 02:56:16 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:00.956 02:56:16 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:00.956 02:56:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.956 02:56:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.956 02:56:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.956 ************************************ 00:09:00.956 START TEST nvme_rpc 00:09:00.956 ************************************ 00:09:00.956 02:56:16 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:00.956 * Looking for test storage... 
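One figure worth pulling out of the FDP statistics page above: the ratio of media bytes to host bytes written is a rough write-amplification number, and here it is essentially 1, meaning the placement handles have not yet forced any relocation of data:

    # Write amplification from the FDP statistics reported above
    echo 'scale=5; 1799036928 / 1798029312' | bc   # -> 1.00056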
00:09:00.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:00.956 02:56:16 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.956 02:56:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.956 02:56:16 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.216 02:56:16 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.216 --rc genhtml_branch_coverage=1 00:09:01.216 --rc genhtml_function_coverage=1 00:09:01.216 --rc genhtml_legend=1 00:09:01.216 --rc geninfo_all_blocks=1 00:09:01.216 --rc geninfo_unexecuted_blocks=1 00:09:01.216 00:09:01.216 ' 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.216 --rc genhtml_branch_coverage=1 00:09:01.216 --rc genhtml_function_coverage=1 00:09:01.216 --rc genhtml_legend=1 00:09:01.216 --rc geninfo_all_blocks=1 00:09:01.216 --rc geninfo_unexecuted_blocks=1 00:09:01.216 00:09:01.216 ' 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:01.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.216 --rc genhtml_branch_coverage=1 00:09:01.216 --rc genhtml_function_coverage=1 00:09:01.216 --rc genhtml_legend=1 00:09:01.216 --rc geninfo_all_blocks=1 00:09:01.216 --rc geninfo_unexecuted_blocks=1 00:09:01.216 00:09:01.216 ' 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.216 --rc genhtml_branch_coverage=1 00:09:01.216 --rc genhtml_function_coverage=1 00:09:01.216 --rc genhtml_legend=1 00:09:01.216 --rc geninfo_all_blocks=1 00:09:01.216 --rc geninfo_unexecuted_blocks=1 00:09:01.216 00:09:01.216 ' 00:09:01.216 02:56:16 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.216 02:56:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:01.216 02:56:16 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:01.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.216 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:01.216 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77126 00:09:01.216 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:01.216 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77126 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77126 ']' 00:09:01.216 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.216 02:56:17 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 [2024-11-29 02:56:17.136438] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
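The bdf picked above is simply the first entry of whatever gen_nvme.sh reports: that script emits an SPDK bdev configuration as JSON, and the helper extracts every traddr with jq and keeps element zero. A condensed sketch of that lookup, with the paths as they appear in this trace:

    # Condensed from get_first_nvme_bdf in common/autotest_common.sh:
    # gen_nvme.sh prints a JSON bdev config; collect the PCI addresses, take the first.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1
    echo "${bdfs[0]}"   # -> 0000:00:10.0 on this runner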
00:09:01.216 [2024-11-29 02:56:17.136544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:09:01.475 [2024-11-29 02:56:17.280495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.475 [2024-11-29 02:56:17.299549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.475 [2024-11-29 02:56:17.299589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.041 02:56:17 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.041 02:56:17 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:02.041 02:56:17 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:02.300 Nvme0n1 00:09:02.300 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:02.300 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:02.558 request: 00:09:02.558 { 00:09:02.558 "bdev_name": "Nvme0n1", 00:09:02.558 "filename": "non_existing_file", 00:09:02.558 "method": "bdev_nvme_apply_firmware", 00:09:02.558 "req_id": 1 00:09:02.558 } 00:09:02.558 Got JSON-RPC error response 00:09:02.558 response: 00:09:02.558 { 00:09:02.558 "code": -32603, 00:09:02.558 "message": "open file failed." 00:09:02.558 } 00:09:02.558 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:02.558 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:02.558 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:02.817 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:02.817 02:56:18 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77126 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77126 ']' 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77126 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77126 00:09:02.817 killing process with pid 77126 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77126' 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77126 00:09:02.817 02:56:18 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77126 00:09:03.075 ************************************ 00:09:03.075 END TEST nvme_rpc 00:09:03.075 ************************************ 00:09:03.075 00:09:03.075 real 0m2.027s 00:09:03.075 user 0m3.985s 00:09:03.075 sys 0m0.468s 00:09:03.075 02:56:18 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.075 02:56:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.075 02:56:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:03.075 02:56:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:03.075 02:56:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.075 02:56:18 -- common/autotest_common.sh@10 -- # set +x 00:09:03.075 ************************************ 00:09:03.075 START TEST nvme_rpc_timeouts 00:09:03.075 ************************************ 00:09:03.075 02:56:18 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:03.075 * Looking for test storage... 00:09:03.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:03.075 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.075 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.075 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.335 02:56:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.335 --rc genhtml_branch_coverage=1 00:09:03.335 --rc genhtml_function_coverage=1 00:09:03.335 --rc genhtml_legend=1 00:09:03.335 --rc geninfo_all_blocks=1 00:09:03.335 --rc geninfo_unexecuted_blocks=1 00:09:03.335 00:09:03.335 ' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.335 --rc genhtml_branch_coverage=1 00:09:03.335 --rc genhtml_function_coverage=1 00:09:03.335 --rc genhtml_legend=1 00:09:03.335 --rc geninfo_all_blocks=1 00:09:03.335 --rc geninfo_unexecuted_blocks=1 00:09:03.335 00:09:03.335 ' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.335 --rc genhtml_branch_coverage=1 00:09:03.335 --rc genhtml_function_coverage=1 00:09:03.335 --rc genhtml_legend=1 00:09:03.335 --rc geninfo_all_blocks=1 00:09:03.335 --rc geninfo_unexecuted_blocks=1 00:09:03.335 00:09:03.335 ' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.335 --rc genhtml_branch_coverage=1 00:09:03.335 --rc genhtml_function_coverage=1 00:09:03.335 --rc genhtml_legend=1 00:09:03.335 --rc geninfo_all_blocks=1 00:09:03.335 --rc geninfo_unexecuted_blocks=1 00:09:03.335 00:09:03.335 ' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77180 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77180 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77212 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
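The recurring 'lt 1.15 2' traces are scripts/common.sh comparing the installed lcov version against 2, field by field, to pick the right set of coverage flags. A condensed sketch of that comparison (the gt/ge/le variants and the '.-:' separator handling of the real cmp_versions are omitted):

    # Field-by-field version compare in the style of scripts/common.sh cmp_versions
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal
    }

    version_lt 1.15 2 && echo 'lcov 1.x: use the lcov_*_coverage option set'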
00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77212 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77212 ']' 00:09:03.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.335 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.335 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:03.335 [2024-11-29 02:56:19.156458] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:09:03.335 [2024-11-29 02:56:19.156575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77212 ] 00:09:03.335 [2024-11-29 02:56:19.302568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:03.335 [2024-11-29 02:56:19.321620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.335 [2024-11-29 02:56:19.321702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.269 Checking default timeout settings: 00:09:04.269 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.269 02:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:04.269 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:04.269 02:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:04.528 Making settings changes with rpc: 00:09:04.528 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:04.528 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:04.528 Check default vs. modified settings: 00:09:04.528 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:04.528 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.095 Setting action_on_timeout is changed as expected. 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:05.095 Setting timeout_us is changed as expected. 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77180 00:09:05.095 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:05.096 Setting timeout_admin_us is changed as expected. 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77180 /tmp/settings_modified_77180 00:09:05.096 02:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77212 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77212 ']' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77212 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77212 00:09:05.096 killing process with pid 77212 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77212' 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77212 00:09:05.096 02:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77212 00:09:05.354 RPC TIMEOUT SETTING TEST PASSED. 00:09:05.354 02:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
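The pass lines above come from plain text comparison of two save_config dumps: for each setting, grep the relevant line, take the second field, strip punctuation, and require that the default and the modified value differ. A condensed sketch of that check, using the tmp files and values from this run:

    # Condensed settings check from nvme_rpc_timeouts.sh:
    # pull one value out of a saved config dump.
    get_setting() {   # get_setting <name> <file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    before=$(get_setting timeout_us /tmp/settings_default_77180)    # -> 0
    after=$(get_setting timeout_us /tmp/settings_modified_77180)    # -> 12000000
    [[ $before != "$after" ]] && echo 'Setting timeout_us is changed as expected.'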
00:09:05.354 00:09:05.354 real 0m2.201s 00:09:05.354 user 0m4.431s 00:09:05.354 sys 0m0.447s 00:09:05.354 ************************************ 00:09:05.354 END TEST nvme_rpc_timeouts 00:09:05.354 ************************************ 00:09:05.354 02:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.354 02:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:05.354 02:56:21 -- spdk/autotest.sh@239 -- # uname -s 00:09:05.354 02:56:21 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:05.354 02:56:21 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:05.354 02:56:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.354 02:56:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.354 02:56:21 -- common/autotest_common.sh@10 -- # set +x 00:09:05.354 ************************************ 00:09:05.354 START TEST sw_hotplug 00:09:05.354 ************************************ 00:09:05.354 02:56:21 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:05.355 * Looking for test storage... 00:09:05.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.355 02:56:21 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.355 --rc genhtml_branch_coverage=1 00:09:05.355 --rc genhtml_function_coverage=1 00:09:05.355 --rc genhtml_legend=1 00:09:05.355 --rc geninfo_all_blocks=1 00:09:05.355 --rc geninfo_unexecuted_blocks=1 00:09:05.355 00:09:05.355 ' 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.355 --rc genhtml_branch_coverage=1 00:09:05.355 --rc genhtml_function_coverage=1 00:09:05.355 --rc genhtml_legend=1 00:09:05.355 --rc geninfo_all_blocks=1 00:09:05.355 --rc geninfo_unexecuted_blocks=1 00:09:05.355 00:09:05.355 ' 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.355 --rc genhtml_branch_coverage=1 00:09:05.355 --rc genhtml_function_coverage=1 00:09:05.355 --rc genhtml_legend=1 00:09:05.355 --rc geninfo_all_blocks=1 00:09:05.355 --rc geninfo_unexecuted_blocks=1 00:09:05.355 00:09:05.355 ' 00:09:05.355 02:56:21 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.355 --rc genhtml_branch_coverage=1 00:09:05.355 --rc genhtml_function_coverage=1 00:09:05.355 --rc genhtml_legend=1 00:09:05.355 --rc geninfo_all_blocks=1 00:09:05.355 --rc geninfo_unexecuted_blocks=1 00:09:05.355 00:09:05.355 ' 00:09:05.355 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:05.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:05.872 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:05.872 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:05.872 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:05.872 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:05.872 02:56:21 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:05.872 02:56:21 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:05.872 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:06.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.390 Waiting for block devices as requested 00:09:06.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.390 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.648 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:11.932 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:11.932 02:56:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:11.932 02:56:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:11.932 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:11.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:11.932 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:12.191 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:12.450 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.450 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:12.450 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:12.450 02:56:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78051 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:12.708 02:56:28 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:12.708 02:56:28 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:12.708 02:56:28 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:12.708 02:56:28 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:12.708 02:56:28 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:12.708 02:56:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:12.708 Initializing NVMe Controllers 00:09:12.708 Attaching to 0000:00:10.0 00:09:12.708 Attaching to 0000:00:11.0 00:09:12.708 Attached to 0000:00:10.0 00:09:12.708 Attached to 0000:00:11.0 00:09:12.708 Initialization complete. Starting I/O... 
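[Note] With I/O now running against both allowed controllers, the hotplug example is exposed to the surprise-removal events the harness injects. The `echo 1` writes traced at sw_hotplug.sh@40 below drive those removals; xtrace does not show redirections, so the sysfs target is inferred here, but the cleanup trap later in this log (`echo 1 > /sys/bus/pci/rescan`) confirms the rescan side. A minimal sketch with an illustrative BDF:

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove   # surprise-remove the PCI function
    # later, bring removed functions back on the bus:
    echo 1 > /sys/bus/pci/rescan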
00:09:12.708 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:12.708 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:12.708 00:09:14.081 QEMU NVMe Ctrl (12340 ): 3214 I/Os completed (+3214) 00:09:14.081 QEMU NVMe Ctrl (12341 ): 3117 I/Os completed (+3117) 00:09:14.081 00:09:15.016 QEMU NVMe Ctrl (12340 ): 7308 I/Os completed (+4094) 00:09:15.016 QEMU NVMe Ctrl (12341 ): 7199 I/Os completed (+4082) 00:09:15.016 00:09:15.950 QEMU NVMe Ctrl (12340 ): 11592 I/Os completed (+4284) 00:09:15.950 QEMU NVMe Ctrl (12341 ): 11657 I/Os completed (+4458) 00:09:15.950 00:09:16.884 QEMU NVMe Ctrl (12340 ): 15785 I/Os completed (+4193) 00:09:16.884 QEMU NVMe Ctrl (12341 ): 15933 I/Os completed (+4276) 00:09:16.884 00:09:17.817 QEMU NVMe Ctrl (12340 ): 19850 I/Os completed (+4065) 00:09:17.817 QEMU NVMe Ctrl (12341 ): 19970 I/Os completed (+4037) 00:09:17.817 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:18.752 [2024-11-29 02:56:34.488712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:18.752 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:18.752 [2024-11-29 02:56:34.489716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.489751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.489766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.489780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:18.752 [2024-11-29 02:56:34.490950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.490977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.490989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.491003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:18.752 [2024-11-29 02:56:34.511908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:18.752 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:18.752 [2024-11-29 02:56:34.512969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.513096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.513119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.513134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:18.752 [2024-11-29 02:56:34.514154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.514183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.514198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 [2024-11-29 02:56:34.514209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.752 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:09:18.752 EAL: Scan for (pci) bus failed. 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:18.752 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:18.752 Attaching to 0000:00:10.0 00:09:18.752 Attached to 0000:00:10.0 00:09:18.752 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:19.010 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:19.010 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:19.010 Attaching to 0000:00:11.0 00:09:19.010 Attached to 0000:00:11.0 00:09:19.949 QEMU NVMe Ctrl (12340 ): 3948 I/Os completed (+3948) 00:09:19.949 QEMU NVMe Ctrl (12341 ): 3866 I/Os completed (+3866) 00:09:19.949 00:09:20.884 QEMU NVMe Ctrl (12340 ): 8120 I/Os completed (+4172) 00:09:20.884 QEMU NVMe Ctrl (12341 ): 8300 I/Os completed (+4434) 00:09:20.884 00:09:21.820 QEMU NVMe Ctrl (12340 ): 12188 I/Os completed (+4068) 00:09:21.820 QEMU NVMe Ctrl (12341 ): 12373 I/Os completed (+4073) 00:09:21.820 00:09:22.770 QEMU NVMe Ctrl (12340 ): 16260 I/Os completed (+4072) 00:09:22.770 QEMU NVMe Ctrl (12341 ): 16485 I/Os completed (+4112) 00:09:22.770 00:09:23.703 QEMU NVMe Ctrl (12340 ): 20760 I/Os completed (+4500) 00:09:23.703 QEMU NVMe Ctrl (12341 ): 21275 I/Os completed (+4790) 00:09:23.703 00:09:25.079 QEMU NVMe Ctrl (12340 ): 24867 I/Os completed (+4107) 00:09:25.079 QEMU NVMe Ctrl (12341 ): 25913 I/Os completed (+4638) 00:09:25.079 00:09:26.013 QEMU NVMe Ctrl (12340 ): 29742 I/Os completed (+4875) 00:09:26.013 QEMU NVMe Ctrl (12341 ): 30597 I/Os completed 
(+4684) 00:09:26.013 00:09:26.946 QEMU NVMe Ctrl (12340 ): 33983 I/Os completed (+4241) 00:09:26.946 QEMU NVMe Ctrl (12341 ): 34752 I/Os completed (+4155) 00:09:26.946 00:09:27.879 QEMU NVMe Ctrl (12340 ): 38145 I/Os completed (+4162) 00:09:27.879 QEMU NVMe Ctrl (12341 ): 39222 I/Os completed (+4470) 00:09:27.879 00:09:28.814 QEMU NVMe Ctrl (12340 ): 42421 I/Os completed (+4276) 00:09:28.814 QEMU NVMe Ctrl (12341 ): 43640 I/Os completed (+4418) 00:09:28.814 00:09:29.761 QEMU NVMe Ctrl (12340 ): 46737 I/Os completed (+4316) 00:09:29.761 QEMU NVMe Ctrl (12341 ): 47959 I/Os completed (+4319) 00:09:29.761 00:09:30.694 QEMU NVMe Ctrl (12340 ): 51103 I/Os completed (+4366) 00:09:30.694 QEMU NVMe Ctrl (12341 ): 52319 I/Os completed (+4360) 00:09:30.694 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.951 [2024-11-29 02:56:46.752815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:30.951 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:30.951 [2024-11-29 02:56:46.753620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.753647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.753660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.753675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:30.951 [2024-11-29 02:56:46.755957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.755990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.756001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 [2024-11-29 02:56:46.756012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.951 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:30.951 EAL: Scan for (pci) bus failed. 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.951 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.952 [2024-11-29 02:56:46.773551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:30.952 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:30.952 [2024-11-29 02:56:46.774268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.774292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.774304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.774316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:30.952 [2024-11-29 02:56:46.775163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.775187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.775202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 [2024-11-29 02:56:46.775212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:30.952 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:30.952 EAL: Scan for (pci) bus failed. 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:30.952 Attaching to 0000:00:10.0 00:09:30.952 02:56:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:31.209 Attached to 0000:00:10.0 00:09:31.209 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:31.210 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.210 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:31.210 Attaching to 0000:00:11.0 00:09:31.210 Attached to 0000:00:11.0 00:09:31.776 QEMU NVMe Ctrl (12340 ): 2945 I/Os completed (+2945) 00:09:31.776 QEMU NVMe Ctrl (12341 ): 2774 I/Os completed (+2774) 00:09:31.776 00:09:32.710 QEMU NVMe Ctrl (12340 ): 6988 I/Os completed (+4043) 00:09:32.710 QEMU NVMe Ctrl (12341 ): 7211 I/Os completed (+4437) 00:09:32.710 00:09:34.085 QEMU NVMe Ctrl (12340 ): 11006 I/Os completed (+4018) 00:09:34.085 QEMU NVMe Ctrl (12341 ): 11522 I/Os completed (+4311) 00:09:34.085 00:09:35.019 QEMU NVMe Ctrl (12340 ): 15030 I/Os completed (+4024) 00:09:35.019 QEMU NVMe Ctrl (12341 ): 15680 I/Os completed (+4158) 00:09:35.019 00:09:35.953 QEMU NVMe Ctrl (12340 ): 19205 I/Os completed (+4175) 00:09:35.953 QEMU NVMe Ctrl (12341 ): 19856 I/Os completed (+4176) 00:09:35.953 00:09:36.894 QEMU NVMe Ctrl (12340 ): 23136 I/Os completed (+3931) 00:09:36.894 QEMU NVMe Ctrl (12341 ): 23854 I/Os completed (+3998) 00:09:36.894 00:09:37.848 QEMU NVMe Ctrl (12340 ): 26742 I/Os completed (+3606) 00:09:37.848 QEMU NVMe Ctrl (12341 ): 27484 I/Os completed (+3630) 00:09:37.848 
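[Note] Between hotplug events the helper re-attaches the controllers; the sw_hotplug.sh@56-@62 echoes above are the usual sysfs rebind dance onto uio_pci_generic. The redirection targets are again hidden by xtrace, so this reconstruction is an assumption based on the standard driver_override kernel interface rather than a verbatim copy of the script:

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/rescan                          # re-enumerate removed functions
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf" > /sys/bus/pci/drivers_probe              # bind according to the override
    echo '' > /sys/bus/pci/devices/$bdf/driver_override   # clear the override afterwards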
00:09:38.786 QEMU NVMe Ctrl (12340 ): 30511 I/Os completed (+3769) 00:09:38.786 QEMU NVMe Ctrl (12341 ): 31265 I/Os completed (+3781) 00:09:38.786 00:09:39.730 QEMU NVMe Ctrl (12340 ): 34275 I/Os completed (+3764) 00:09:39.730 QEMU NVMe Ctrl (12341 ): 35051 I/Os completed (+3786) 00:09:39.730 00:09:40.666 QEMU NVMe Ctrl (12340 ): 38164 I/Os completed (+3889) 00:09:40.666 QEMU NVMe Ctrl (12341 ): 39026 I/Os completed (+3975) 00:09:40.666 00:09:42.039 QEMU NVMe Ctrl (12340 ): 42415 I/Os completed (+4251) 00:09:42.039 QEMU NVMe Ctrl (12341 ): 43257 I/Os completed (+4231) 00:09:42.039 00:09:42.973 QEMU NVMe Ctrl (12340 ): 46655 I/Os completed (+4240) 00:09:42.973 QEMU NVMe Ctrl (12341 ): 47484 I/Os completed (+4227) 00:09:42.973 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.231 [2024-11-29 02:56:59.024282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:43.231 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:43.231 [2024-11-29 02:56:59.025092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.025120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.025131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.025146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:43.231 [2024-11-29 02:56:59.026570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.026605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.026616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.026631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.231 [2024-11-29 02:56:59.047870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:43.231 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:43.231 [2024-11-29 02:56:59.048600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.048626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.048644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.048658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:43.231 [2024-11-29 02:56:59.049513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.049541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.049553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 [2024-11-29 02:56:59.049564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:43.231 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:43.491 Attaching to 0000:00:10.0 00:09:43.491 Attached to 0000:00:10.0 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.491 02:56:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:43.491 Attaching to 0000:00:11.0 00:09:43.491 Attached to 0000:00:11.0 00:09:43.491 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:43.491 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:43.491 [2024-11-29 02:56:59.309348] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:55.716 02:57:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:55.716 02:57:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:55.716 02:57:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.82 00:09:55.716 02:57:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.82 00:09:55.716 02:57:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:09:55.716 02:57:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:09:55.716 02:57:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:09:55.716 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 02:57:11 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78051 00:10:02.322 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78051) - No such process 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78051 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78604 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:02.322 02:57:17 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78604 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78604 ']' 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.322 02:57:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:02.322 [2024-11-29 02:57:17.387319] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:02.322 [2024-11-29 02:57:17.387446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78604 ] 00:10:02.322 [2024-11-29 02:57:17.531170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.322 [2024-11-29 02:57:17.550367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:02.322 02:57:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:02.322 02:57:18 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:02.322 02:57:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:08.882 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.882 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.882 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.882 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.882 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:08.883 [2024-11-29 02:57:24.325988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:08.883 [2024-11-29 02:57:24.327097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.327132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.327144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.327158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.327167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.327173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.327183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.327189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.327197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.327203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.327212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.327218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:08.883 02:57:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.883 [2024-11-29 02:57:24.825991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:08.883 [2024-11-29 02:57:24.827100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.827135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.827159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.827166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.827181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.827190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.827196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 [2024-11-29 02:57:24.827207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.883 [2024-11-29 02:57:24.827213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.883 [2024-11-29 02:57:24.827221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:08.883 02:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:09.454 02:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.454 02:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.454 02:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:09.454 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.715 02:57:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.944 [2024-11-29 02:57:37.726188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:21.944 [2024-11-29 02:57:37.727422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.944 [2024-11-29 02:57:37.727459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.944 [2024-11-29 02:57:37.727471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.944 [2024-11-29 02:57:37.727484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.944 [2024-11-29 02:57:37.727493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.944 [2024-11-29 02:57:37.727500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.944 [2024-11-29 02:57:37.727508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.944 [2024-11-29 02:57:37.727515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.944 [2024-11-29 02:57:37.727523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.944 [2024-11-29 02:57:37.727530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.944 [2024-11-29 02:57:37.727539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.944 [2024-11-29 02:57:37.727545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.944 02:57:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:21.944 02:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:22.510 [2024-11-29 02:57:38.226197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:22.510 [2024-11-29 02:57:38.227213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.510 [2024-11-29 02:57:38.227247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.510 [2024-11-29 02:57:38.227258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.510 [2024-11-29 02:57:38.227270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.510 [2024-11-29 02:57:38.227278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.510 [2024-11-29 02:57:38.227286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.510 [2024-11-29 02:57:38.227292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.510 [2024-11-29 02:57:38.227305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.510 [2024-11-29 02:57:38.227311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.510 [2024-11-29 02:57:38.227319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.510 [2024-11-29 02:57:38.227325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.510 [2024-11-29 02:57:38.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.510 02:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.510 02:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.510 02:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.510 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:22.768 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.768 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.768 02:57:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.966 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.966 02:57:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.966 02:57:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.967 02:57:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.967 [2024-11-29 02:57:50.626416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:34.967 [2024-11-29 02:57:50.627481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.967 [2024-11-29 02:57:50.627512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.967 [2024-11-29 02:57:50.627526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.967 [2024-11-29 02:57:50.627538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.967 [2024-11-29 02:57:50.627547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.967 [2024-11-29 02:57:50.627554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.967 [2024-11-29 02:57:50.627562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.967 [2024-11-29 02:57:50.627568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.967 [2024-11-29 02:57:50.627577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.967 [2024-11-29 02:57:50.627583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.967 [2024-11-29 02:57:50.627590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.967 [2024-11-29 02:57:50.627597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.967 02:57:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.967 02:57:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.967 02:57:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:34.967 02:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:35.225 [2024-11-29 02:57:51.026421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:35.225 [2024-11-29 02:57:51.027420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.225 [2024-11-29 02:57:51.027452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.225 [2024-11-29 02:57:51.027462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.225 [2024-11-29 02:57:51.027473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.225 [2024-11-29 02:57:51.027480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.225 [2024-11-29 02:57:51.027490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.225 [2024-11-29 02:57:51.027497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.225 [2024-11-29 02:57:51.027505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.225 [2024-11-29 02:57:51.027511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.225 [2024-11-29 02:57:51.027521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.225 [2024-11-29 02:57:51.027527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.225 [2024-11-29 02:57:51.027535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:35.225 02:57:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:35.225 02:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:35.225 02:57:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:35.225 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.484 02:57:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.22 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.22 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:10:47.684 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:10:47.684 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.684 02:58:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:47.685 02:58:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:47.685 02:58:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:47.685 02:58:03 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.237 02:58:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.237 02:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.237 02:58:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:54.237 02:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.237 [2024-11-29 02:58:09.580112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:54.237 [2024-11-29 02:58:09.580911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.580936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.580948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.580960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.580968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.580975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.580982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.580989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.580999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.581005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.581013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.581019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.980112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:54.237 [2024-11-29 02:58:09.980852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.980873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.980882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.980893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.980900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.980908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.980915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.237 [2024-11-29 02:58:09.980922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.237 [2024-11-29 02:58:09.980929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.237 [2024-11-29 02:58:09.980937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.238 [2024-11-29 02:58:09.980943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.238 [2024-11-29 02:58:09.980953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.238 02:58:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.238 02:58:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.238 02:58:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.238 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.496 02:58:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.712 02:58:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.712 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.712 [2024-11-29 02:58:22.480315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
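Each hotplug event closes the same way: after re-binding both devices the script sleeps 12 seconds (sw_hotplug.sh@66) to let the driver probe, then compares the freshly collected BDF list against the expected pair at sw_hotplug.sh@71. A sketch of that verification step, assuming the nvmes array holds the two BDFs under test:

    # sw_hotplug.sh@66-71: give the probe time to complete, then require that
    # exactly the expected controllers (0000:00:10.0 and 0000:00:11.0) are back.
    sleep 12
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]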
00:11:06.712 [2024-11-29 02:58:22.481097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.712 [2024-11-29 02:58:22.481123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.712 [2024-11-29 02:58:22.481135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.712 [2024-11-29 02:58:22.481146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.712 [2024-11-29 02:58:22.481155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.712 [2024-11-29 02:58:22.481162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.712 [2024-11-29 02:58:22.481170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.712 [2024-11-29 02:58:22.481176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.712 [2024-11-29 02:58:22.481184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.712 [2024-11-29 02:58:22.481190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.712 [2024-11-29 02:58:22.481197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.712 [2024-11-29 02:58:22.481204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.971 [2024-11-29 02:58:22.880317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
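The ERROR/NOTICE blocks here are expected output, not test failures: surprise-removing a controller puts it into failed state (nvme_ctrlr_fail), and the PCIe transport then aborts the four ASYNC EVENT REQUESTs (cid 190 down to 187) that sit outstanding on the admin queue. The printed status "(00/07)" is status code type 0 (generic) with status code 0x07, Command Abort Requested, which SPDK renders as "ABORTED - BY REQUEST". The removal itself is the bare `echo 1` at sw_hotplug.sh@40; xtrace does not show redirection targets, so the sysfs path in the sketch below is an assumption based on standard PCI surprise-removal, not taken from the script:

    # Hypothetical expansion of sw_hotplug.sh@39-40: detach each controller
    # through sysfs. The redirect target is assumed, not shown in the trace.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done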
00:11:06.971 [2024-11-29 02:58:22.881065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.971 [2024-11-29 02:58:22.881099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.971 [2024-11-29 02:58:22.881108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.971 [2024-11-29 02:58:22.881119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.971 [2024-11-29 02:58:22.881126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.971 [2024-11-29 02:58:22.881134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.971 [2024-11-29 02:58:22.881140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.971 [2024-11-29 02:58:22.881148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.971 [2024-11-29 02:58:22.881155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.971 [2024-11-29 02:58:22.881163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.971 [2024-11-29 02:58:22.881169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.971 [2024-11-29 02:58:22.881176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.971 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.971 02:58:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.971 02:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.229 02:58:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.229 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.229 02:58:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.229 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.488 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.488 02:58:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.684 [2024-11-29 02:58:35.280539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:19.684 [2024-11-29 02:58:35.281463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.684 [2024-11-29 02:58:35.281497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.684 [2024-11-29 02:58:35.281510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.684 [2024-11-29 02:58:35.281522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.684 [2024-11-29 02:58:35.281532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.684 [2024-11-29 02:58:35.281539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.684 [2024-11-29 02:58:35.281547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.684 [2024-11-29 02:58:35.281554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.684 [2024-11-29 02:58:35.281563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.684 [2024-11-29 02:58:35.281570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.684 [2024-11-29 02:58:35.281578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.684 [2024-11-29 02:58:35.281584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.684 02:58:35 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.684 02:58:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:19.684 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.942 [2024-11-29 02:58:35.680544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:19.942 [2024-11-29 02:58:35.681322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.942 [2024-11-29 02:58:35.681356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.942 [2024-11-29 02:58:35.681366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.942 [2024-11-29 02:58:35.681377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.942 [2024-11-29 02:58:35.681384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.942 [2024-11-29 02:58:35.681392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.942 [2024-11-29 02:58:35.681399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.942 [2024-11-29 02:58:35.681408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.942 [2024-11-29 02:58:35.681415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.942 [2024-11-29 02:58:35.681423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.942 [2024-11-29 02:58:35.681429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.942 [2024-11-29 02:58:35.681437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:19.942 02:58:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.942 02:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 02:58:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.942 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.201 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.201 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.201 02:58:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.201 02:58:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.65 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.65 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:11:32.433 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:32.433 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78604 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78604 ']' 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78604 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78604 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.433 killing 
process with pid 78604 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78604' 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78604 00:11:32.433 02:58:48 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78604 00:11:32.695 02:58:48 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.220 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.220 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.481 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.481 00:11:33.481 real 2m28.187s 00:11:33.481 user 1m48.894s 00:11:33.481 sys 0m17.961s 00:11:33.481 02:58:49 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.481 ************************************ 00:11:33.481 END TEST sw_hotplug 00:11:33.481 ************************************ 00:11:33.481 02:58:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.481 02:58:49 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:33.481 02:58:49 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:33.481 02:58:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.481 02:58:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.481 02:58:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.481 ************************************ 00:11:33.481 START TEST nvme_xnvme 00:11:33.481 ************************************ 00:11:33.481 02:58:49 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:33.745 * Looking for test storage... 
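sw_hotplug finishes by tearing down the SPDK target through the killprocess helper from autotest_common.sh: it checks that the PID is still alive with `kill -0`, inspects the process name (reactor_0 here) to special-case sudo wrappers, then kills and reaps it. A sketch reconstructed from the xtrace above; the real helper handles more corner cases:

    # autotest_common.sh killprocess, as exercised at 02:58:48 with pid 78604.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then     # sudo wrappers get different handling
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true
        fi
    }

With the hotplug suite closed out (real 2m28.187s, END TEST sw_hotplug) and setup.sh having re-bound the remaining devices, autotest moves on to run_test nvme_xnvme; the rest of the trace is the standard test preamble: lcov version probing, the build_config.sh/SPDK_CONFIG_H consistency check, and the exported path and sanitizer environment.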
00:11:33.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.745 02:58:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.745 --rc genhtml_branch_coverage=1 00:11:33.745 --rc genhtml_function_coverage=1 00:11:33.745 --rc genhtml_legend=1 00:11:33.745 --rc geninfo_all_blocks=1 00:11:33.745 --rc geninfo_unexecuted_blocks=1 00:11:33.745 00:11:33.745 ' 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.745 --rc genhtml_branch_coverage=1 00:11:33.745 --rc genhtml_function_coverage=1 00:11:33.745 --rc genhtml_legend=1 00:11:33.745 --rc geninfo_all_blocks=1 00:11:33.745 --rc geninfo_unexecuted_blocks=1 00:11:33.745 00:11:33.745 ' 00:11:33.745 02:58:49 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.745 --rc genhtml_branch_coverage=1 00:11:33.745 --rc genhtml_function_coverage=1 00:11:33.745 --rc genhtml_legend=1 00:11:33.745 --rc geninfo_all_blocks=1 00:11:33.745 --rc geninfo_unexecuted_blocks=1 00:11:33.745 00:11:33.745 ' 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.745 --rc genhtml_branch_coverage=1 00:11:33.745 --rc genhtml_function_coverage=1 00:11:33.745 --rc genhtml_legend=1 00:11:33.745 --rc geninfo_all_blocks=1 00:11:33.745 --rc geninfo_unexecuted_blocks=1 00:11:33.745 00:11:33.745 ' 00:11:33.745 02:58:49 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:33.745 02:58:49 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:33.745 02:58:49 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:33.745 02:58:49 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:33.746 02:58:49 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:33.746 02:58:49 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:33.746 02:58:49 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:33.746 02:58:49 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:33.746 #define SPDK_CONFIG_H 00:11:33.746 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:33.746 #define SPDK_CONFIG_APPS 1 00:11:33.746 #define SPDK_CONFIG_ARCH native 00:11:33.746 #define SPDK_CONFIG_ASAN 1 00:11:33.746 #undef SPDK_CONFIG_AVAHI 00:11:33.746 #undef SPDK_CONFIG_CET 00:11:33.746 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:33.746 #define SPDK_CONFIG_COVERAGE 1 00:11:33.746 #define SPDK_CONFIG_CROSS_PREFIX 00:11:33.746 #undef SPDK_CONFIG_CRYPTO 00:11:33.746 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:33.746 #undef SPDK_CONFIG_CUSTOMOCF 00:11:33.746 #undef SPDK_CONFIG_DAOS 00:11:33.746 #define SPDK_CONFIG_DAOS_DIR 00:11:33.746 #define SPDK_CONFIG_DEBUG 1 00:11:33.746 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:33.746 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:11:33.746 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:11:33.746 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:11:33.746 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:33.746 #undef SPDK_CONFIG_DPDK_UADK 00:11:33.746 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:33.746 #define SPDK_CONFIG_EXAMPLES 1 00:11:33.746 #undef SPDK_CONFIG_FC 00:11:33.746 #define SPDK_CONFIG_FC_PATH 00:11:33.746 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:33.746 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:33.746 #define SPDK_CONFIG_FSDEV 1 00:11:33.746 #undef SPDK_CONFIG_FUSE 00:11:33.746 #undef SPDK_CONFIG_FUZZER 00:11:33.746 #define SPDK_CONFIG_FUZZER_LIB 00:11:33.746 #undef SPDK_CONFIG_GOLANG 00:11:33.746 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:33.746 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:33.746 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:33.746 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:33.746 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:33.746 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:33.746 #undef SPDK_CONFIG_HAVE_LZ4 00:11:33.746 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:33.746 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:33.746 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:33.746 #define SPDK_CONFIG_IDXD 1 00:11:33.746 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:33.746 #undef SPDK_CONFIG_IPSEC_MB 00:11:33.746 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:33.746 #define SPDK_CONFIG_ISAL 1 00:11:33.746 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:33.746 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:33.746 #define SPDK_CONFIG_LIBDIR 00:11:33.746 #undef SPDK_CONFIG_LTO 00:11:33.746 #define SPDK_CONFIG_MAX_LCORES 128 00:11:33.746 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:33.746 #define SPDK_CONFIG_NVME_CUSE 1 00:11:33.746 #undef 
SPDK_CONFIG_OCF 00:11:33.746 #define SPDK_CONFIG_OCF_PATH 00:11:33.746 #define SPDK_CONFIG_OPENSSL_PATH 00:11:33.746 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:33.746 #define SPDK_CONFIG_PGO_DIR 00:11:33.746 #undef SPDK_CONFIG_PGO_USE 00:11:33.746 #define SPDK_CONFIG_PREFIX /usr/local 00:11:33.746 #undef SPDK_CONFIG_RAID5F 00:11:33.746 #undef SPDK_CONFIG_RBD 00:11:33.746 #define SPDK_CONFIG_RDMA 1 00:11:33.746 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:33.746 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:33.746 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:33.746 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:33.746 #define SPDK_CONFIG_SHARED 1 00:11:33.746 #undef SPDK_CONFIG_SMA 00:11:33.746 #define SPDK_CONFIG_TESTS 1 00:11:33.746 #undef SPDK_CONFIG_TSAN 00:11:33.746 #define SPDK_CONFIG_UBLK 1 00:11:33.746 #define SPDK_CONFIG_UBSAN 1 00:11:33.746 #undef SPDK_CONFIG_UNIT_TESTS 00:11:33.746 #undef SPDK_CONFIG_URING 00:11:33.746 #define SPDK_CONFIG_URING_PATH 00:11:33.746 #undef SPDK_CONFIG_URING_ZNS 00:11:33.746 #undef SPDK_CONFIG_USDT 00:11:33.746 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:33.746 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:33.746 #undef SPDK_CONFIG_VFIO_USER 00:11:33.746 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:33.746 #define SPDK_CONFIG_VHOST 1 00:11:33.746 #define SPDK_CONFIG_VIRTIO 1 00:11:33.746 #undef SPDK_CONFIG_VTUNE 00:11:33.746 #define SPDK_CONFIG_VTUNE_DIR 00:11:33.746 #define SPDK_CONFIG_WERROR 1 00:11:33.746 #define SPDK_CONFIG_WPDK_DIR 00:11:33.746 #define SPDK_CONFIG_XNVME 1 00:11:33.746 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:33.746 02:58:49 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:33.746 02:58:49 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.746 02:58:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.747 02:58:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.747 02:58:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.747 02:58:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.747 02:58:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.747 02:58:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.747 02:58:49 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.747 02:58:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:33.747 02:58:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:33.747 02:58:49 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:33.747 02:58:49 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:33.747 02:58:49 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:33.747 02:58:49 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:33.748 02:58:49 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 79942 ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 79942 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.gkbRlv 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.gkbRlv/tests/xnvme /tmp/spdk.gkbRlv 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:33.748 02:58:49 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13343916032 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:33.749 02:58:49 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6238617600 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13343916032 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6238617600 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=155648 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:33.749 02:58:49 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97187536896 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2515243008 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:33.749 * Looking for test storage... 
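Note for readers of this trace: the set_test_storage probe above reduces to a df walk — parse every mount into parallel maps, then take the first candidate directory whose filesystem has enough free space. A minimal bash sketch of the pattern (simplified; the tmpfs/ramfs special cases and exact unit handling of the real helper are omitted, and the paths are the ones shown in the trace):

#!/usr/bin/env bash
# Sketch of the storage probe: the xnvme tests ask for ~2 GiB of scratch.
requested_size=2214592512                  # bytes, as requested in the trace above
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
declare -A fss avails
# Column order matches the trace's read: source fs size use avail use% mount (GNU df).
while read -r source fs size use avail _ mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$avail
done < <(df -T -B1 | grep -v Filesystem)
# Resolve which mount backs the test dir, exactly as the awk in the trace does.
mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
if (( ${avails[$mount]:-0} >= requested_size )); then
    printf '* Found test storage at %s\n' "$testdir"
fi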
00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13343916032 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.749 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.009 02:58:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.009 02:58:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:34.009 02:58:49 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.009 02:58:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.010 --rc genhtml_branch_coverage=1 00:11:34.010 --rc genhtml_function_coverage=1 00:11:34.010 --rc genhtml_legend=1 00:11:34.010 --rc geninfo_all_blocks=1 00:11:34.010 --rc geninfo_unexecuted_blocks=1 00:11:34.010 00:11:34.010 ' 00:11:34.010 02:58:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.010 --rc genhtml_branch_coverage=1 00:11:34.010 --rc genhtml_function_coverage=1 00:11:34.010 --rc genhtml_legend=1 00:11:34.010 --rc geninfo_all_blocks=1 
00:11:34.010 --rc geninfo_unexecuted_blocks=1 00:11:34.010 00:11:34.010 ' 00:11:34.010 02:58:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.010 --rc genhtml_branch_coverage=1 00:11:34.010 --rc genhtml_function_coverage=1 00:11:34.010 --rc genhtml_legend=1 00:11:34.010 --rc geninfo_all_blocks=1 00:11:34.010 --rc geninfo_unexecuted_blocks=1 00:11:34.010 00:11:34.010 ' 00:11:34.010 02:58:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.010 --rc genhtml_branch_coverage=1 00:11:34.010 --rc genhtml_function_coverage=1 00:11:34.010 --rc genhtml_legend=1 00:11:34.010 --rc geninfo_all_blocks=1 00:11:34.010 --rc geninfo_unexecuted_blocks=1 00:11:34.010 00:11:34.010 ' 00:11:34.010 02:58:49 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.010 02:58:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.010 02:58:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.010 02:58:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.010 02:58:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.010 02:58:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.010 02:58:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.010 02:58:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.010 02:58:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:34.010 02:58:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.010 02:58:49 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:34.010 02:58:49 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.270 Waiting for block devices as requested 00:11:34.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.531 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.792 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.084 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:40.084 02:58:55 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:40.084 02:58:55 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:40.084 02:58:55 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:40.345 02:58:56 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:40.345 02:58:56 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:40.346 02:58:56 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:40.346 No valid GPT data, bailing 00:11:40.346 02:58:56 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:40.346 02:58:56 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:40.346 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:40.346 02:58:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.346 02:58:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.346 02:58:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.346 ************************************ 00:11:40.346 START TEST xnvme_rpc 00:11:40.346 ************************************ 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:40.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80339 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80339 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80339 ']' 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.346 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:40.346 [2024-11-29 02:58:56.292728] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
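Note for readers of this trace: the xnvme_rpc pass starting here is short enough to restate. rpc_cmd in the trace is SPDK's wrapper around scripts/rpc.py, so the test amounts to the following sketch (the jq filters are the ones the trace itself runs; the pid and socket values are this run's):

# Start the target, create an xnvme bdev over libaio, read the config back, tear down.
./build/bin/spdk_tgt &                # the trace's run got pid 80339
spdk_tgt=$!
# ...wait until /var/tmp/spdk.sock accepts RPCs (the trace's waitforlisten), then:
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio     # no -c: conserve_cpu=false
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'        # -> xnvme_bdev
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'    # -> /dev/nvme0n1
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$spdk_tgt"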
00:11:40.346 [2024-11-29 02:58:56.293096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80339 ] 00:11:40.606 [2024-11-29 02:58:56.442642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.606 [2024-11-29 02:58:56.471039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.177 xnvme_bdev 00:11:41.177 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.437 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:41.437 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:41.437 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:41.437 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80339 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80339 ']' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80339 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80339 00:11:41.438 killing process with pid 80339 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80339' 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80339 00:11:41.438 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80339 00:11:41.698 ************************************ 00:11:41.699 END TEST xnvme_rpc 00:11:41.699 ************************************ 00:11:41.699 00:11:41.699 real 0m1.462s 00:11:41.699 user 0m1.502s 00:11:41.699 sys 0m0.428s 00:11:41.699 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.699 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.959 02:58:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:41.959 02:58:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.959 02:58:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.959 02:58:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:41.959 ************************************ 00:11:41.959 START TEST xnvme_bdevperf 00:11:41.959 ************************************ 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:41.959 02:58:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:41.959 { 00:11:41.959 "subsystems": [ 00:11:41.959 { 00:11:41.959 "subsystem": "bdev", 00:11:41.959 "config": [ 00:11:41.959 { 00:11:41.959 "params": { 00:11:41.959 "io_mechanism": "libaio", 00:11:41.959 "conserve_cpu": false, 00:11:41.959 "filename": "/dev/nvme0n1", 00:11:41.959 "name": "xnvme_bdev" 00:11:41.959 }, 00:11:41.959 "method": "bdev_xnvme_create" 00:11:41.959 }, 00:11:41.959 { 00:11:41.959 "method": "bdev_wait_for_examine" 00:11:41.959 } 00:11:41.959 ] 00:11:41.959 } 00:11:41.959 ] 00:11:41.959 } 00:11:41.959 [2024-11-29 02:58:57.812194] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:41.959 [2024-11-29 02:58:57.812324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80394 ] 00:11:42.220 [2024-11-29 02:58:57.959863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.220 [2024-11-29 02:58:57.989333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.220 Running I/O for 5 seconds... 00:11:44.544 25610.00 IOPS, 100.04 MiB/s [2024-11-29T02:59:01.477Z] 25265.50 IOPS, 98.69 MiB/s [2024-11-29T02:59:02.423Z] 24657.67 IOPS, 96.32 MiB/s [2024-11-29T02:59:03.367Z] 24966.50 IOPS, 97.53 MiB/s 00:11:47.375 Latency(us) 00:11:47.375 [2024-11-29T02:59:03.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.375 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:47.375 xnvme_bdev : 5.01 25157.27 98.27 0.00 0.00 2538.66 363.91 10183.29 00:11:47.375 [2024-11-29T02:59:03.367Z] =================================================================================================================== 00:11:47.375 [2024-11-29T02:59:03.367Z] Total : 25157.27 98.27 0.00 0.00 2538.66 363.91 10183.29 00:11:47.375 02:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:47.375 02:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:11:47.375 02:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:11:47.375 02:59:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:47.375 02:59:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 { 00:11:47.375 "subsystems": [ 00:11:47.375 { 00:11:47.375 "subsystem": "bdev", 00:11:47.375 "config": [ 00:11:47.375 { 00:11:47.375 "params": { 00:11:47.375 "io_mechanism": "libaio", 00:11:47.375 "conserve_cpu": false, 00:11:47.375 "filename": "/dev/nvme0n1", 00:11:47.375 "name": "xnvme_bdev" 00:11:47.375 }, 00:11:47.375 "method": "bdev_xnvme_create" 00:11:47.375 }, 00:11:47.375 { 00:11:47.375 "method": "bdev_wait_for_examine" 00:11:47.375 } 00:11:47.375 ] 00:11:47.375 } 00:11:47.375 ] 00:11:47.375 } 00:11:47.636 [2024-11-29 02:59:03.383535] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
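Note for readers of this trace: both bdevperf passes receive their bdev table as JSON on /dev/fd/62 via gen_conf. The standalone equivalent is to write the exact config printed above to a file and point --json at it (file path illustrative):

cat > /tmp/xnvme_libaio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same flags as the randwrite pass traced here: queue depth 64, 5 s, 4 KiB I/O.
./build/examples/bdevperf --json /tmp/xnvme_libaio.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096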
00:11:47.636 [2024-11-29 02:59:03.383675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80466 ] 00:11:47.636 [2024-11-29 02:59:03.523305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.636 [2024-11-29 02:59:03.551551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.897 Running I/O for 5 seconds... 00:11:49.785 34005.00 IOPS, 132.83 MiB/s [2024-11-29T02:59:06.720Z] 33431.50 IOPS, 130.59 MiB/s [2024-11-29T02:59:08.108Z] 32347.33 IOPS, 126.36 MiB/s [2024-11-29T02:59:08.678Z] 32540.75 IOPS, 127.11 MiB/s 00:11:52.686 Latency(us) 00:11:52.686 [2024-11-29T02:59:08.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.686 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:11:52.686 xnvme_bdev : 5.00 32958.39 128.74 0.00 0.00 1937.25 441.11 8620.50 00:11:52.686 [2024-11-29T02:59:08.678Z] =================================================================================================================== 00:11:52.686 [2024-11-29T02:59:08.678Z] Total : 32958.39 128.74 0.00 0.00 1937.25 441.11 8620.50 00:11:52.946 00:11:52.946 real 0m11.116s 00:11:52.946 user 0m3.190s 00:11:52.946 sys 0m6.417s 00:11:52.946 02:59:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.946 ************************************ 00:11:52.946 END TEST xnvme_bdevperf 00:11:52.946 ************************************ 00:11:52.946 02:59:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:52.946 02:59:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:11:52.946 02:59:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.946 02:59:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.946 02:59:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:52.946 ************************************ 00:11:52.946 START TEST xnvme_fio_plugin 00:11:52.946 ************************************ 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:52.946 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:53.207 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:53.207 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:53.207 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:11:53.207 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:53.207 02:59:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:53.207 { 00:11:53.207 "subsystems": [ 00:11:53.207 { 00:11:53.207 "subsystem": "bdev", 00:11:53.207 "config": [ 00:11:53.207 { 00:11:53.207 "params": { 00:11:53.207 "io_mechanism": "libaio", 00:11:53.207 "conserve_cpu": false, 00:11:53.207 "filename": "/dev/nvme0n1", 00:11:53.207 "name": "xnvme_bdev" 00:11:53.207 }, 00:11:53.207 "method": "bdev_xnvme_create" 00:11:53.207 }, 00:11:53.207 { 00:11:53.207 "method": "bdev_wait_for_examine" 00:11:53.207 } 00:11:53.207 ] 00:11:53.207 } 00:11:53.207 ] 00:11:53.207 } 00:11:53.207 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:11:53.207 fio-3.35 00:11:53.207 Starting 1 thread 00:11:58.581 00:11:58.581 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80574: Fri Nov 29 02:59:14 2024 00:11:58.581 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec) 00:11:58.581 slat (usec): min=4, max=2106, avg=21.71, stdev=95.61 00:11:58.581 clat (usec): min=107, max=5724, avg=1345.50, stdev=562.41 00:11:58.581 lat (usec): min=193, max=5730, avg=1367.21, stdev=554.67 00:11:58.581 clat percentiles (usec): 00:11:58.581 | 1.00th=[ 277], 5.00th=[ 490], 10.00th=[ 652], 20.00th=[ 865], 00:11:58.581 | 30.00th=[ 1029], 40.00th=[ 1172], 50.00th=[ 1319], 60.00th=[ 1467], 00:11:58.581 | 70.00th=[ 1598], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2278], 00:11:58.581 | 99.00th=[ 2966], 99.50th=[ 3294], 99.90th=[ 4080], 99.95th=[ 4228], 00:11:58.581 | 99.99th=[ 4621] 00:11:58.581 bw ( KiB/s): min=123032, max=142504, per=99.65%, avg=132378.67, 
stdev=5921.10, samples=9 00:11:58.581 iops : min=30758, max=35626, avg=33094.67, stdev=1480.27, samples=9 00:11:58.581 lat (usec) : 250=0.70%, 500=4.52%, 750=9.07%, 1000=13.72% 00:11:58.581 lat (msec) : 2=61.09%, 4=10.78%, 10=0.12% 00:11:58.581 cpu : usr=41.88%, sys=49.18%, ctx=18, majf=0, minf=1065 00:11:58.581 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.4%, 16=23.3%, 32=61.4%, >=64=2.1% 00:11:58.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.581 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:11:58.581 issued rwts: total=166095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:58.581 00:11:58.581 Run status group 0 (all jobs): 00:11:58.581 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (680MB), run=5001-5001msec 00:11:59.155 ----------------------------------------------------- 00:11:59.155 Suppressions used: 00:11:59.155 count bytes template 00:11:59.155 1 11 /usr/src/fio/parse.c 00:11:59.155 1 8 libtcmalloc_minimal.so 00:11:59.155 1 904 libcrypto.so 00:11:59.155 ----------------------------------------------------- 00:11:59.155 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:11:59.155 02:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:59.155 02:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:59.155 02:59:15 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:59.155 02:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:11:59.155 02:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:59.156 02:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:59.156 { 00:11:59.156 "subsystems": [ 00:11:59.156 { 00:11:59.156 "subsystem": "bdev", 00:11:59.156 "config": [ 00:11:59.156 { 00:11:59.156 "params": { 00:11:59.156 "io_mechanism": "libaio", 00:11:59.156 "conserve_cpu": false, 00:11:59.156 "filename": "/dev/nvme0n1", 00:11:59.156 "name": "xnvme_bdev" 00:11:59.156 }, 00:11:59.156 "method": "bdev_xnvme_create" 00:11:59.156 }, 00:11:59.156 { 00:11:59.156 "method": "bdev_wait_for_examine" 00:11:59.156 } 00:11:59.156 ] 00:11:59.156 } 00:11:59.156 ] 00:11:59.156 } 00:11:59.417 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:11:59.417 fio-3.35 00:11:59.417 Starting 1 thread 00:12:04.745 00:12:04.745 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80655: Fri Nov 29 02:59:20 2024 00:12:04.745 write: IOPS=36.7k, BW=143MiB/s (150MB/s)(718MiB/5001msec); 0 zone resets 00:12:04.745 slat (usec): min=4, max=1888, avg=20.70, stdev=77.59 00:12:04.745 clat (usec): min=107, max=5081, avg=1175.74, stdev=552.13 00:12:04.745 lat (usec): min=185, max=5154, avg=1196.45, stdev=548.11 00:12:04.745 clat percentiles (usec): 00:12:04.745 | 1.00th=[ 262], 5.00th=[ 412], 10.00th=[ 537], 20.00th=[ 717], 00:12:04.745 | 30.00th=[ 865], 40.00th=[ 988], 50.00th=[ 1106], 60.00th=[ 1237], 00:12:04.745 | 70.00th=[ 1369], 80.00th=[ 1565], 90.00th=[ 1876], 95.00th=[ 2180], 00:12:04.745 | 99.00th=[ 2933], 99.50th=[ 3261], 99.90th=[ 3949], 99.95th=[ 4146], 00:12:04.745 | 99.99th=[ 4621] 00:12:04.745 bw ( KiB/s): min=131024, max=166576, per=99.20%, avg=145760.00, stdev=9723.94, samples=9 00:12:04.745 iops : min=32756, max=41644, avg=36440.22, stdev=2431.39, samples=9 00:12:04.745 lat (usec) : 250=0.85%, 500=7.49%, 750=13.70%, 1000=18.96% 00:12:04.745 lat (msec) : 2=51.60%, 4=7.32%, 10=0.08% 00:12:04.745 cpu : usr=37.92%, sys=50.84%, ctx=13, majf=0, minf=1066 00:12:04.745 IO depths : 1=0.3%, 2=0.9%, 4=2.7%, 8=8.3%, 16=23.9%, 32=61.8%, >=64=2.1% 00:12:04.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.745 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:04.745 issued rwts: total=0,183712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:04.745 00:12:04.745 Run status group 0 (all jobs): 00:12:04.745 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=718MiB (752MB), run=5001-5001msec 00:12:05.318 ----------------------------------------------------- 00:12:05.318 Suppressions used: 00:12:05.318 count bytes template 00:12:05.318 1 11 /usr/src/fio/parse.c 00:12:05.318 1 8 libtcmalloc_minimal.so 00:12:05.318 1 904 libcrypto.so 00:12:05.318 ----------------------------------------------------- 00:12:05.318 00:12:05.318 00:12:05.318 real 0m12.104s 00:12:05.318 user 0m5.139s 00:12:05.318 sys 0m5.555s 00:12:05.318 02:59:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.318 ************************************ 00:12:05.318 END TEST xnvme_fio_plugin 00:12:05.318 ************************************ 00:12:05.318 02:59:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 02:59:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:05.318 02:59:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:05.318 02:59:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:05.318 02:59:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:05.318 02:59:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.318 02:59:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.318 02:59:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 ************************************ 00:12:05.318 START TEST xnvme_rpc 00:12:05.318 ************************************ 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:05.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80734 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80734 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80734 ']' 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.318 02:59:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 [2024-11-29 02:59:21.186463] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:05.318 [2024-11-29 02:59:21.186603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80734 ] 00:12:05.580 [2024-11-29 02:59:21.335251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.580 [2024-11-29 02:59:21.364140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.153 xnvme_bdev 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.153 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80734 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80734 ']' 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80734 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80734 00:12:06.414 killing process with pid 80734 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80734' 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80734 00:12:06.414 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80734 00:12:06.675 00:12:06.675 real 0m1.456s 00:12:06.675 user 0m1.512s 00:12:06.675 sys 0m0.405s 00:12:06.675 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.675 02:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.675 ************************************ 00:12:06.675 END TEST xnvme_rpc 00:12:06.675 ************************************ 00:12:06.675 02:59:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:06.675 02:59:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.675 02:59:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.675 02:59:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.675 ************************************ 00:12:06.675 START TEST xnvme_bdevperf 00:12:06.675 ************************************ 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:06.675 02:59:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:06.936 { 00:12:06.936 "subsystems": [ 00:12:06.936 { 00:12:06.936 "subsystem": "bdev", 00:12:06.936 "config": [ 00:12:06.936 { 00:12:06.936 "params": { 00:12:06.936 "io_mechanism": "libaio", 00:12:06.936 "conserve_cpu": true, 00:12:06.936 "filename": "/dev/nvme0n1", 00:12:06.936 "name": "xnvme_bdev" 00:12:06.936 }, 00:12:06.936 "method": "bdev_xnvme_create" 00:12:06.936 }, 00:12:06.936 { 00:12:06.936 "method": "bdev_wait_for_examine" 00:12:06.936 } 00:12:06.936 ] 00:12:06.936 } 00:12:06.936 ] 00:12:06.936 } 00:12:06.936 [2024-11-29 02:59:22.696347] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:06.936 [2024-11-29 02:59:22.696492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80792 ] 00:12:06.936 [2024-11-29 02:59:22.843728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.936 [2024-11-29 02:59:22.872526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.199 Running I/O for 5 seconds... 00:12:09.084 31065.00 IOPS, 121.35 MiB/s [2024-11-29T02:59:26.020Z] 30737.00 IOPS, 120.07 MiB/s [2024-11-29T02:59:27.408Z] 31463.33 IOPS, 122.90 MiB/s [2024-11-29T02:59:28.354Z] 30928.25 IOPS, 120.81 MiB/s [2024-11-29T02:59:28.354Z] 30857.20 IOPS, 120.54 MiB/s 00:12:12.362 Latency(us) 00:12:12.362 [2024-11-29T02:59:28.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.362 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:12.362 xnvme_bdev : 5.01 30836.62 120.46 0.00 0.00 2070.75 450.56 8418.86 00:12:12.362 [2024-11-29T02:59:28.354Z] =================================================================================================================== 00:12:12.362 [2024-11-29T02:59:28.354Z] Total : 30836.62 120.46 0.00 0.00 2070.75 450.56 8418.86 00:12:12.362 02:59:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:12.362 02:59:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:12.362 02:59:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:12.362 02:59:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:12.362 02:59:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:12.362 { 00:12:12.362 "subsystems": [ 00:12:12.362 { 00:12:12.362 "subsystem": "bdev", 00:12:12.362 "config": [ 00:12:12.362 { 00:12:12.362 "params": { 00:12:12.362 "io_mechanism": "libaio", 00:12:12.362 "conserve_cpu": true, 00:12:12.362 "filename": "/dev/nvme0n1", 00:12:12.362 "name": "xnvme_bdev" 00:12:12.362 }, 00:12:12.362 "method": "bdev_xnvme_create" 00:12:12.362 }, 00:12:12.362 { 00:12:12.362 "method": "bdev_wait_for_examine" 00:12:12.362 } 00:12:12.362 ] 00:12:12.362 } 00:12:12.362 ] 00:12:12.362 } 00:12:12.362 [2024-11-29 02:59:28.336872] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:12.362 [2024-11-29 02:59:28.337196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80857 ] 00:12:12.623 [2024-11-29 02:59:28.484438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.623 [2024-11-29 02:59:28.516236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.884 Running I/O for 5 seconds... 00:12:14.775 34076.00 IOPS, 133.11 MiB/s [2024-11-29T02:59:31.712Z] 34700.00 IOPS, 135.55 MiB/s [2024-11-29T02:59:32.654Z] 34318.00 IOPS, 134.05 MiB/s [2024-11-29T02:59:34.041Z] 34246.00 IOPS, 133.77 MiB/s 00:12:18.049 Latency(us) 00:12:18.049 [2024-11-29T02:59:34.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.049 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:18.050 xnvme_bdev : 5.00 34415.98 134.44 0.00 0.00 1855.05 204.80 7259.37 00:12:18.050 [2024-11-29T02:59:34.042Z] =================================================================================================================== 00:12:18.050 [2024-11-29T02:59:34.042Z] Total : 34415.98 134.44 0.00 0.00 1855.05 204.80 7259.37 00:12:18.050 00:12:18.050 real 0m11.208s 00:12:18.050 user 0m3.431s 00:12:18.050 sys 0m6.155s 00:12:18.050 02:59:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.050 02:59:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 ************************************ 00:12:18.050 END TEST xnvme_bdevperf 00:12:18.050 ************************************ 00:12:18.050 02:59:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:18.050 02:59:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.050 02:59:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.050 02:59:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 ************************************ 00:12:18.050 START TEST xnvme_fio_plugin 00:12:18.050 ************************************ 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:18.050 02:59:33 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:18.050 02:59:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:18.050 { 00:12:18.050 "subsystems": [ 00:12:18.050 { 00:12:18.050 "subsystem": "bdev", 00:12:18.050 "config": [ 00:12:18.050 { 00:12:18.050 "params": { 00:12:18.050 "io_mechanism": "libaio", 00:12:18.050 "conserve_cpu": true, 00:12:18.050 "filename": "/dev/nvme0n1", 00:12:18.050 "name": "xnvme_bdev" 00:12:18.050 }, 00:12:18.050 "method": "bdev_xnvme_create" 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "method": "bdev_wait_for_examine" 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 } 00:12:18.312 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:18.312 fio-3.35 00:12:18.312 Starting 1 thread 00:12:23.609 00:12:23.609 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80965: Fri Nov 29 02:59:39 2024 00:12:23.609 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(644MiB/5001msec) 00:12:23.609 slat (usec): min=4, max=2480, avg=21.43, stdev=95.22 00:12:23.609 clat (usec): min=108, max=4874, avg=1355.77, stdev=558.05 00:12:23.609 lat (usec): min=174, max=4893, avg=1377.20, stdev=549.44 00:12:23.609 clat percentiles (usec): 00:12:23.609 | 1.00th=[ 273], 5.00th=[ 498], 10.00th=[ 660], 20.00th=[ 873], 00:12:23.609 | 30.00th=[ 1045], 40.00th=[ 1188], 50.00th=[ 1336], 60.00th=[ 1467], 00:12:23.609 | 70.00th=[ 1614], 80.00th=[ 1795], 90.00th=[ 2057], 95.00th=[ 2311], 00:12:23.609 | 99.00th=[ 2900], 99.50th=[ 3195], 99.90th=[ 3752], 99.95th=[ 3949], 00:12:23.609 | 99.99th=[ 4113] 00:12:23.609 bw ( KiB/s): min=119696, max=138952, per=99.36%, avg=131116.44, stdev=6385.86, 
samples=9 00:12:23.609 iops : min=29924, max=34738, avg=32779.11, stdev=1596.47, samples=9 00:12:23.609 lat (usec) : 250=0.69%, 500=4.38%, 750=8.76%, 1000=13.17% 00:12:23.609 lat (msec) : 2=61.23%, 4=11.74%, 10=0.03% 00:12:23.609 cpu : usr=42.10%, sys=49.38%, ctx=11, majf=0, minf=1065 00:12:23.609 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.5%, 16=23.1%, 32=61.5%, >=64=2.1% 00:12:23.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.609 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:23.609 issued rwts: total=164976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.609 00:12:23.609 Run status group 0 (all jobs): 00:12:23.609 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=644MiB (676MB), run=5001-5001msec 00:12:24.182 ----------------------------------------------------- 00:12:24.182 Suppressions used: 00:12:24.182 count bytes template 00:12:24.182 1 11 /usr/src/fio/parse.c 00:12:24.182 1 8 libtcmalloc_minimal.so 00:12:24.182 1 904 libcrypto.so 00:12:24.182 ----------------------------------------------------- 00:12:24.182 00:12:24.182 02:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:24.182 02:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:24.182 02:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:24.182 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:24.183 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:24.183 { 00:12:24.183 "subsystems": [ 00:12:24.183 { 00:12:24.183 "subsystem": "bdev", 00:12:24.183 "config": [ 00:12:24.183 { 00:12:24.183 "params": { 00:12:24.183 "io_mechanism": "libaio", 00:12:24.183 "conserve_cpu": true, 00:12:24.183 "filename": "/dev/nvme0n1", 00:12:24.183 "name": "xnvme_bdev" 00:12:24.183 }, 00:12:24.183 "method": "bdev_xnvme_create" 00:12:24.183 }, 00:12:24.183 { 00:12:24.183 "method": "bdev_wait_for_examine" 00:12:24.183 } 00:12:24.183 ] 00:12:24.183 } 00:12:24.183 ] 00:12:24.183 } 00:12:24.183 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:24.183 fio-3.35 00:12:24.183 Starting 1 thread 00:12:30.776 00:12:30.776 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81046: Fri Nov 29 02:59:45 2024 00:12:30.776 write: IOPS=33.9k, BW=132MiB/s (139MB/s)(662MiB/5001msec); 0 zone resets 00:12:30.776 slat (usec): min=4, max=2330, avg=21.71, stdev=91.02 00:12:30.776 clat (usec): min=87, max=4903, avg=1302.68, stdev=576.28 00:12:30.776 lat (usec): min=178, max=4973, avg=1324.38, stdev=569.79 00:12:30.776 clat percentiles (usec): 00:12:30.776 | 1.00th=[ 273], 5.00th=[ 453], 10.00th=[ 611], 20.00th=[ 807], 00:12:30.776 | 30.00th=[ 971], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1401], 00:12:30.776 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 2024], 95.00th=[ 2311], 00:12:30.776 | 99.00th=[ 3032], 99.50th=[ 3294], 99.90th=[ 3916], 99.95th=[ 4228], 00:12:30.776 | 99.99th=[ 4752] 00:12:30.776 bw ( KiB/s): min=125560, max=142872, per=99.81%, avg=135333.78, stdev=5912.43, samples=9 00:12:30.776 iops : min=31390, max=35718, avg=33833.33, stdev=1478.06, samples=9 00:12:30.776 lat (usec) : 100=0.01%, 250=0.73%, 500=5.48%, 750=10.31%, 1000=15.32% 00:12:30.776 lat (msec) : 2=57.72%, 4=10.34%, 10=0.08% 00:12:30.776 cpu : usr=40.26%, sys=50.72%, ctx=12, majf=0, minf=1066 00:12:30.776 IO depths : 1=0.4%, 2=1.1%, 4=2.9%, 8=8.3%, 16=23.6%, 32=61.7%, >=64=2.1% 00:12:30.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.776 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:30.776 issued rwts: total=0,169521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.776 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.776 00:12:30.776 Run status group 0 (all jobs): 00:12:30.776 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=662MiB (694MB), run=5001-5001msec 00:12:30.776 ----------------------------------------------------- 00:12:30.776 Suppressions used: 00:12:30.776 count bytes template 00:12:30.776 1 11 /usr/src/fio/parse.c 00:12:30.776 1 8 libtcmalloc_minimal.so 00:12:30.776 1 904 libcrypto.so 00:12:30.776 ----------------------------------------------------- 00:12:30.776 00:12:30.776 00:12:30.776 real 0m12.055s 00:12:30.776 user 0m5.255s 00:12:30.776 sys 0m5.548s 00:12:30.776 
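Every fio pass in this suite follows the same pattern: the harness resolves libasan via ldd and LD_PRELOADs it ahead of the SPDK fio plugin so ASan interposes first, then feeds the bdev configuration to fio as JSON on an inherited descriptor (/dev/fd/62). A minimal standalone reproduction of the run that just finished, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk built with the fio plugin and an NVMe namespace at /dev/nvme0n1 (paths and flags copied from the log; the JSON goes in a temp file here instead of /dev/fd/62, and the ASan preload is dropped for a non-instrumented build):

  # bdev config: one xnvme bdev over libaio with conserve_cpu enabled
  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_xnvme_create",
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": true,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            }
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # stock fio binary with the SPDK bdev ioengine preloaded
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev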
************************************ 00:12:30.776 END TEST xnvme_fio_plugin 00:12:30.776 ************************************ 00:12:30.776 02:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.776 02:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:30.776 02:59:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:30.776 02:59:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:30.776 02:59:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.776 02:59:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.776 ************************************ 00:12:30.776 START TEST xnvme_rpc 00:12:30.776 ************************************ 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81121 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81121 00:12:30.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81121 ']' 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:30.776 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.776 [2024-11-29 02:59:46.111440] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:30.776 [2024-11-29 02:59:46.111593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81121 ] 00:12:30.776 [2024-11-29 02:59:46.250512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.776 [2024-11-29 02:59:46.278774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.038 xnvme_bdev 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.038 02:59:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:31.038 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.038 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:31.038 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:31.300 02:59:47 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81121 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81121 ']' 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81121 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81121 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.300 killing process with pid 81121 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81121' 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81121 00:12:31.300 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81121 00:12:31.562 00:12:31.562 real 0m1.473s 00:12:31.562 user 0m1.544s 00:12:31.562 sys 0m0.416s 00:12:31.562 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.562 ************************************ 00:12:31.562 END TEST xnvme_rpc 00:12:31.562 ************************************ 00:12:31.562 02:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.824 02:59:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.824 02:59:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.824 02:59:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.824 02:59:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.824 ************************************ 00:12:31.824 START TEST xnvme_bdevperf 00:12:31.824 ************************************ 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.824 02:59:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.824 { 00:12:31.824 "subsystems": [ 00:12:31.824 { 00:12:31.824 "subsystem": "bdev", 00:12:31.824 "config": [ 00:12:31.824 { 00:12:31.824 "params": { 00:12:31.824 "io_mechanism": "io_uring", 00:12:31.824 "conserve_cpu": false, 00:12:31.824 "filename": "/dev/nvme0n1", 00:12:31.824 "name": "xnvme_bdev" 00:12:31.824 }, 00:12:31.824 "method": "bdev_xnvme_create" 00:12:31.824 }, 00:12:31.824 { 00:12:31.824 "method": "bdev_wait_for_examine" 00:12:31.824 } 00:12:31.824 ] 00:12:31.824 } 00:12:31.824 ] 00:12:31.824 } 00:12:31.824 [2024-11-29 02:59:47.638470] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:31.824 [2024-11-29 02:59:47.638602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81184 ] 00:12:31.824 [2024-11-29 02:59:47.776798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.824 [2024-11-29 02:59:47.805109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.085 Running I/O for 5 seconds... 00:12:34.042 34372.00 IOPS, 134.27 MiB/s [2024-11-29T02:59:50.979Z] 34682.00 IOPS, 135.48 MiB/s [2024-11-29T02:59:51.925Z] 34422.33 IOPS, 134.46 MiB/s [2024-11-29T02:59:53.312Z] 34309.25 IOPS, 134.02 MiB/s [2024-11-29T02:59:53.312Z] 33871.40 IOPS, 132.31 MiB/s 00:12:37.320 Latency(us) 00:12:37.320 [2024-11-29T02:59:53.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.320 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.320 xnvme_bdev : 5.01 33846.87 132.21 0.00 0.00 1887.06 863.31 6906.49 00:12:37.320 [2024-11-29T02:59:53.312Z] =================================================================================================================== 00:12:37.320 [2024-11-29T02:59:53.312Z] Total : 33846.87 132.21 0.00 0.00 1887.06 863.31 6906.49 00:12:37.320 02:59:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.320 02:59:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:37.320 02:59:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.320 02:59:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.320 02:59:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.320 { 00:12:37.320 "subsystems": [ 00:12:37.320 { 00:12:37.320 "subsystem": "bdev", 00:12:37.320 "config": [ 00:12:37.320 { 00:12:37.320 "params": { 00:12:37.320 "io_mechanism": "io_uring", 00:12:37.320 "conserve_cpu": false, 00:12:37.320 "filename": "/dev/nvme0n1", 00:12:37.320 "name": "xnvme_bdev" 00:12:37.320 }, 00:12:37.320 "method": "bdev_xnvme_create" 00:12:37.320 }, 00:12:37.320 { 00:12:37.320 "method": "bdev_wait_for_examine" 00:12:37.320 } 00:12:37.320 ] 00:12:37.320 } 00:12:37.320 ] 00:12:37.320 } 00:12:37.320 [2024-11-29 02:59:53.172584] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
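The bdevperf passes (randread above, randwrite starting below) drive the same JSON bdev config through the standalone bdevperf example instead of fio, varying only -w. Stripped of the xtrace wrapping, the invocation is roughly the following (a sketch reusing the config file from the earlier fio example, with io_mechanism set to io_uring and conserve_cpu false per the config printed above; the log passes the JSON as /dev/fd/62):

  # 64-deep 4 KiB random reads for 5 s against the xnvme_bdev target
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev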
00:12:37.320 [2024-11-29 02:59:53.172724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81249 ] 00:12:37.581 [2024-11-29 02:59:53.318885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.581 [2024-11-29 02:59:53.347561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.581 Running I/O for 5 seconds... 00:12:39.926 34161.00 IOPS, 133.44 MiB/s [2024-11-29T02:59:56.490Z] 34092.50 IOPS, 133.17 MiB/s [2024-11-29T02:59:57.878Z] 34263.33 IOPS, 133.84 MiB/s [2024-11-29T02:59:58.822Z] 34071.25 IOPS, 133.09 MiB/s 00:12:42.830 Latency(us) 00:12:42.830 [2024-11-29T02:59:58.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.830 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:42.830 xnvme_bdev : 5.00 34113.34 133.26 0.00 0.00 1872.11 253.64 9830.40 00:12:42.830 [2024-11-29T02:59:58.822Z] =================================================================================================================== 00:12:42.830 [2024-11-29T02:59:58.822Z] Total : 34113.34 133.26 0.00 0.00 1872.11 253.64 9830.40 00:12:42.830 00:12:42.830 real 0m11.054s 00:12:42.830 user 0m4.620s 00:12:42.830 sys 0m6.182s 00:12:42.830 02:59:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.830 02:59:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.830 ************************************ 00:12:42.830 END TEST xnvme_bdevperf 00:12:42.830 ************************************ 00:12:42.830 02:59:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:42.830 02:59:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.830 02:59:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.830 02:59:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.830 ************************************ 00:12:42.830 START TEST xnvme_fio_plugin 00:12:42.830 ************************************ 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.830 02:59:58 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:42.830 02:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.830 { 00:12:42.830 "subsystems": [ 00:12:42.830 { 00:12:42.830 "subsystem": "bdev", 00:12:42.830 "config": [ 00:12:42.830 { 00:12:42.830 "params": { 00:12:42.830 "io_mechanism": "io_uring", 00:12:42.830 "conserve_cpu": false, 00:12:42.830 "filename": "/dev/nvme0n1", 00:12:42.830 "name": "xnvme_bdev" 00:12:42.830 }, 00:12:42.830 "method": "bdev_xnvme_create" 00:12:42.830 }, 00:12:42.830 { 00:12:42.830 "method": "bdev_wait_for_examine" 00:12:42.830 } 00:12:42.830 ] 00:12:42.830 } 00:12:42.830 ] 00:12:42.830 } 00:12:43.093 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:43.093 fio-3.35 00:12:43.093 Starting 1 thread 00:12:48.385 00:12:48.385 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81354: Fri Nov 29 03:00:04 2024 00:12:48.385 read: IOPS=32.1k, BW=125MiB/s (131MB/s)(626MiB/5002msec) 00:12:48.385 slat (usec): min=2, max=123, avg= 3.64, stdev= 2.06 00:12:48.385 clat (usec): min=952, max=3766, avg=1847.90, stdev=319.93 00:12:48.385 lat (usec): min=955, max=3799, avg=1851.55, stdev=320.31 00:12:48.385 clat percentiles (usec): 00:12:48.385 | 1.00th=[ 1237], 5.00th=[ 1401], 10.00th=[ 1483], 20.00th=[ 1582], 00:12:48.385 | 30.00th=[ 1663], 40.00th=[ 1745], 50.00th=[ 1811], 60.00th=[ 1893], 00:12:48.385 | 70.00th=[ 1975], 80.00th=[ 2089], 90.00th=[ 2278], 95.00th=[ 2409], 00:12:48.386 | 99.00th=[ 2769], 99.50th=[ 2900], 99.90th=[ 3294], 99.95th=[ 3425], 00:12:48.386 | 99.99th=[ 3589] 00:12:48.386 bw ( KiB/s): min=125440, max=131072, per=99.98%, avg=128227.56, 
stdev=1865.66, samples=9 00:12:48.386 iops : min=31360, max=32768, avg=32056.89, stdev=466.42, samples=9 00:12:48.386 lat (usec) : 1000=0.01% 00:12:48.386 lat (msec) : 2=72.18%, 4=27.81% 00:12:48.386 cpu : usr=32.51%, sys=66.29%, ctx=14, majf=0, minf=1063 00:12:48.386 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:12:48.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.386 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:12:48.386 issued rwts: total=160376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.386 00:12:48.386 Run status group 0 (all jobs): 00:12:48.386 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (657MB), run=5002-5002msec 00:12:48.960 ----------------------------------------------------- 00:12:48.960 Suppressions used: 00:12:48.960 count bytes template 00:12:48.960 1 11 /usr/src/fio/parse.c 00:12:48.960 1 8 libtcmalloc_minimal.so 00:12:48.960 1 904 libcrypto.so 00:12:48.960 ----------------------------------------------------- 00:12:48.960 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.960 03:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.960 { 00:12:48.960 "subsystems": [ 00:12:48.960 { 00:12:48.960 "subsystem": "bdev", 00:12:48.960 "config": [ 00:12:48.960 { 00:12:48.960 "params": { 00:12:48.960 "io_mechanism": "io_uring", 00:12:48.960 "conserve_cpu": false, 00:12:48.960 "filename": "/dev/nvme0n1", 00:12:48.960 "name": "xnvme_bdev" 00:12:48.960 }, 00:12:48.960 "method": "bdev_xnvme_create" 00:12:48.960 }, 00:12:48.960 { 00:12:48.960 "method": "bdev_wait_for_examine" 00:12:48.960 } 00:12:48.960 ] 00:12:48.960 } 00:12:48.960 ] 00:12:48.960 } 00:12:48.960 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.960 fio-3.35 00:12:48.960 Starting 1 thread 00:12:55.568 00:12:55.568 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81440: Fri Nov 29 03:00:10 2024 00:12:55.568 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5002msec); 0 zone resets 00:12:55.568 slat (nsec): min=2911, max=52575, avg=3692.77, stdev=1953.56 00:12:55.568 clat (usec): min=383, max=6347, avg=1776.47, stdev=293.66 00:12:55.568 lat (usec): min=387, max=6350, avg=1780.17, stdev=294.05 00:12:55.568 clat percentiles (usec): 00:12:55.568 | 1.00th=[ 1237], 5.00th=[ 1369], 10.00th=[ 1450], 20.00th=[ 1532], 00:12:55.568 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1811], 00:12:55.568 | 70.00th=[ 1893], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2311], 00:12:55.568 | 99.00th=[ 2671], 99.50th=[ 2868], 99.90th=[ 3195], 99.95th=[ 3294], 00:12:55.568 | 99.99th=[ 3785] 00:12:55.568 bw ( KiB/s): min=131552, max=136704, per=100.00%, avg=133216.00, stdev=1647.96, samples=9 00:12:55.568 iops : min=32888, max=34176, avg=33304.00, stdev=411.99, samples=9 00:12:55.568 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:55.568 lat (msec) : 2=81.02%, 4=18.96%, 10=0.01% 00:12:55.568 cpu : usr=33.49%, sys=65.21%, ctx=12, majf=0, minf=1064 00:12:55.568 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:12:55.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.568 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:12:55.568 issued rwts: total=0,166124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:55.568 00:12:55.568 Run status group 0 (all jobs): 00:12:55.568 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (680MB), run=5002-5002msec 00:12:55.568 ----------------------------------------------------- 00:12:55.568 Suppressions used: 00:12:55.568 count bytes template 00:12:55.568 1 11 /usr/src/fio/parse.c 00:12:55.568 1 8 libtcmalloc_minimal.so 00:12:55.568 1 904 libcrypto.so 00:12:55.568 ----------------------------------------------------- 00:12:55.568 00:12:55.568 ************************************ 00:12:55.568 END TEST xnvme_fio_plugin 00:12:55.568 ************************************ 00:12:55.568 00:12:55.568 real 0m12.002s 
00:12:55.568 user 0m4.486s 00:12:55.568 sys 0m7.063s 00:12:55.568 03:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.568 03:00:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.568 03:00:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:55.568 03:00:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:55.568 03:00:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:55.568 03:00:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:55.568 03:00:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.568 03:00:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.568 03:00:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.568 ************************************ 00:12:55.568 START TEST xnvme_rpc 00:12:55.568 ************************************ 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:55.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81515 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81515 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81515 ']' 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.568 03:00:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:55.568 [2024-11-29 03:00:10.851897] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
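The xnvme_rpc test starting here exercises the full create/inspect/delete RPC cycle against a bare spdk_tgt. Condensed to plain RPC calls it is roughly the following (a sketch; rpc_cmd in the log is the autotest wrapper around scripts/rpc.py, and the create arguments and jq filter are copied verbatim from the trace):

  ./build/bin/spdk_tgt &                     # target; the harness waits on /var/tmp/spdk.sock
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # verify each creation parameter via the saved framework config
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
  kill %1                                    # harness equivalent: killprocess $spdk_tgt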
00:12:55.568 [2024-11-29 03:00:10.852052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81515 ] 00:12:55.568 [2024-11-29 03:00:10.991105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.568 [2024-11-29 03:00:11.020648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 xnvme_bdev 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.828 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81515 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81515 ']' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81515 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81515 00:12:56.088 killing process with pid 81515 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81515' 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81515 00:12:56.088 03:00:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81515 00:12:56.348 ************************************ 00:12:56.348 END TEST xnvme_rpc 00:12:56.348 ************************************ 00:12:56.348 00:12:56.348 real 0m1.441s 00:12:56.348 user 0m1.509s 00:12:56.348 sys 0m0.420s 00:12:56.348 03:00:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.348 03:00:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 03:00:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:56.348 03:00:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.348 03:00:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.348 03:00:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 ************************************ 00:12:56.348 START TEST xnvme_bdevperf 00:12:56.348 ************************************ 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:56.348 03:00:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.348 { 00:12:56.348 "subsystems": [ 00:12:56.348 { 00:12:56.348 "subsystem": "bdev", 00:12:56.348 "config": [ 00:12:56.348 { 00:12:56.348 "params": { 00:12:56.348 "io_mechanism": "io_uring", 00:12:56.348 "conserve_cpu": true, 00:12:56.348 "filename": "/dev/nvme0n1", 00:12:56.348 "name": "xnvme_bdev" 00:12:56.348 }, 00:12:56.348 "method": "bdev_xnvme_create" 00:12:56.348 }, 00:12:56.348 { 00:12:56.348 "method": "bdev_wait_for_examine" 00:12:56.348 } 00:12:56.348 ] 00:12:56.348 } 00:12:56.348 ] 00:12:56.348 } 00:12:56.610 [2024-11-29 03:00:12.343721] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:56.610 [2024-11-29 03:00:12.344185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81573 ] 00:12:56.610 [2024-11-29 03:00:12.492006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.610 [2024-11-29 03:00:12.521714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.870 Running I/O for 5 seconds... 00:12:58.755 32202.00 IOPS, 125.79 MiB/s [2024-11-29T03:00:15.691Z] 32363.50 IOPS, 126.42 MiB/s [2024-11-29T03:00:17.080Z] 32428.00 IOPS, 126.67 MiB/s [2024-11-29T03:00:17.654Z] 32413.00 IOPS, 126.61 MiB/s [2024-11-29T03:00:17.654Z] 32378.60 IOPS, 126.48 MiB/s 00:13:01.662 Latency(us) 00:13:01.662 [2024-11-29T03:00:17.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.662 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.662 xnvme_bdev : 5.00 32370.59 126.45 0.00 0.00 1972.94 875.91 8570.09 00:13:01.662 [2024-11-29T03:00:17.654Z] =================================================================================================================== 00:13:01.662 [2024-11-29T03:00:17.654Z] Total : 32370.59 126.45 0.00 0.00 1972.94 875.91 8570.09 00:13:01.923 03:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.923 03:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.923 03:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:01.923 03:00:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.923 03:00:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.923 { 00:13:01.923 "subsystems": [ 00:13:01.923 { 00:13:01.923 "subsystem": "bdev", 00:13:01.923 "config": [ 00:13:01.923 { 00:13:01.923 "params": { 00:13:01.923 "io_mechanism": "io_uring", 00:13:01.923 "conserve_cpu": true, 00:13:01.923 "filename": "/dev/nvme0n1", 00:13:01.923 "name": "xnvme_bdev" 00:13:01.923 }, 00:13:01.923 "method": "bdev_xnvme_create" 00:13:01.923 }, 00:13:01.923 { 00:13:01.923 "method": "bdev_wait_for_examine" 00:13:01.923 } 00:13:01.923 ] 00:13:01.923 } 00:13:01.923 ] 00:13:01.923 } 00:13:01.923 [2024-11-29 03:00:17.890001] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
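
Every bdevperf invocation in this suite has the same shape: the JSON bdev config printed above is generated on the fly and handed to the app over an anonymous file descriptor (/dev/fd/62). A hand-run equivalent of the randread pass, sketched with process substitution (binary path and the /dev/nvme0n1 device come from this log and will differ on other hosts):

# Standalone approximation of the bdevperf randread run above;
# the heredoc mirrors the config dumped by gen_conf in this log.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)
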
00:13:01.923 [2024-11-29 03:00:17.890137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81641 ] 00:13:02.184 [2024-11-29 03:00:18.039207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.184 [2024-11-29 03:00:18.070065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.184 Running I/O for 5 seconds... 00:13:04.510 34291.00 IOPS, 133.95 MiB/s [2024-11-29T03:00:21.446Z] 33988.50 IOPS, 132.77 MiB/s [2024-11-29T03:00:22.389Z] 33613.00 IOPS, 131.30 MiB/s [2024-11-29T03:00:23.333Z] 34179.00 IOPS, 133.51 MiB/s [2024-11-29T03:00:23.333Z] 35164.00 IOPS, 137.36 MiB/s 00:13:07.341 Latency(us) 00:13:07.341 [2024-11-29T03:00:23.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.341 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:07.341 xnvme_bdev : 5.00 35152.47 137.31 0.00 0.00 1816.55 718.38 5116.85 00:13:07.341 [2024-11-29T03:00:23.333Z] =================================================================================================================== 00:13:07.341 [2024-11-29T03:00:23.333Z] Total : 35152.47 137.31 0.00 0.00 1816.55 718.38 5116.85 00:13:07.602 00:13:07.602 real 0m11.088s 00:13:07.602 user 0m6.983s 00:13:07.602 sys 0m3.559s 00:13:07.602 ************************************ 00:13:07.602 END TEST xnvme_bdevperf 00:13:07.602 ************************************ 00:13:07.602 03:00:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.602 03:00:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 03:00:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:07.602 03:00:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.602 03:00:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.602 03:00:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 ************************************ 00:13:07.602 START TEST xnvme_fio_plugin 00:13:07.602 ************************************ 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:07.602 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.603 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.603 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:07.603 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.603 03:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.603 { 00:13:07.603 "subsystems": [ 00:13:07.603 { 00:13:07.603 "subsystem": "bdev", 00:13:07.603 "config": [ 00:13:07.603 { 00:13:07.603 "params": { 00:13:07.603 "io_mechanism": "io_uring", 00:13:07.603 "conserve_cpu": true, 00:13:07.603 "filename": "/dev/nvme0n1", 00:13:07.603 "name": "xnvme_bdev" 00:13:07.603 }, 00:13:07.603 "method": "bdev_xnvme_create" 00:13:07.603 }, 00:13:07.603 { 00:13:07.603 "method": "bdev_wait_for_examine" 00:13:07.603 } 00:13:07.603 ] 00:13:07.603 } 00:13:07.603 ] 00:13:07.603 } 00:13:07.864 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:07.864 fio-3.35 00:13:07.864 Starting 1 thread 00:13:13.160 00:13:13.160 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81745: Fri Nov 29 03:00:29 2024 00:13:13.160 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec) 00:13:13.160 slat (nsec): min=2888, max=88542, avg=3669.39, stdev=2004.20 00:13:13.160 clat (usec): min=904, max=4182, avg=1780.37, stdev=297.95 00:13:13.160 lat (usec): min=907, max=4187, avg=1784.04, stdev=298.39 00:13:13.160 clat percentiles (usec): 00:13:13.160 | 1.00th=[ 1205], 5.00th=[ 1336], 10.00th=[ 1434], 20.00th=[ 1532], 00:13:13.160 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1762], 60.00th=[ 1827], 00:13:13.160 | 70.00th=[ 1909], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:13.160 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3032], 99.95th=[ 3261], 00:13:13.160 | 99.99th=[ 4113] 00:13:13.160 bw ( 
KiB/s): min=128000, max=139264, per=99.52%, avg=132096.00, stdev=4079.97, samples=9 00:13:13.160 iops : min=32000, max=34816, avg=33024.00, stdev=1019.99, samples=9 00:13:13.160 lat (usec) : 1000=0.05% 00:13:13.160 lat (msec) : 2=78.79%, 4=21.13%, 10=0.03% 00:13:13.160 cpu : usr=60.08%, sys=36.10%, ctx=13, majf=0, minf=1063 00:13:13.160 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:13.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.160 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:13.160 issued rwts: total=165952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.160 00:13:13.160 Run status group 0 (all jobs): 00:13:13.160 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (680MB), run=5001-5001msec 00:13:13.421 ----------------------------------------------------- 00:13:13.421 Suppressions used: 00:13:13.421 count bytes template 00:13:13.421 1 11 /usr/src/fio/parse.c 00:13:13.421 1 8 libtcmalloc_minimal.so 00:13:13.421 1 904 libcrypto.so 00:13:13.421 ----------------------------------------------------- 00:13:13.421 00:13:13.421 03:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.421 03:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.422 03:00:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.422 03:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.422 { 00:13:13.422 "subsystems": [ 00:13:13.422 { 00:13:13.422 "subsystem": "bdev", 00:13:13.422 "config": [ 00:13:13.422 { 00:13:13.422 "params": { 00:13:13.422 "io_mechanism": "io_uring", 00:13:13.422 "conserve_cpu": true, 00:13:13.422 "filename": "/dev/nvme0n1", 00:13:13.422 "name": "xnvme_bdev" 00:13:13.422 }, 00:13:13.422 "method": "bdev_xnvme_create" 00:13:13.422 }, 00:13:13.422 { 00:13:13.422 "method": "bdev_wait_for_examine" 00:13:13.422 } 00:13:13.422 ] 00:13:13.422 } 00:13:13.422 ] 00:13:13.422 } 00:13:13.683 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:13.683 fio-3.35 00:13:13.683 Starting 1 thread 00:13:18.974 00:13:18.974 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81826: Fri Nov 29 03:00:34 2024 00:13:18.974 write: IOPS=39.6k, BW=155MiB/s (162MB/s)(775MiB/5001msec); 0 zone resets 00:13:18.974 slat (usec): min=2, max=215, avg= 3.57, stdev= 1.78 00:13:18.974 clat (usec): min=709, max=5602, avg=1475.88, stdev=255.37 00:13:18.974 lat (usec): min=713, max=5606, avg=1479.45, stdev=255.78 00:13:18.974 clat percentiles (usec): 00:13:18.974 | 1.00th=[ 1045], 5.00th=[ 1139], 10.00th=[ 1188], 20.00th=[ 1270], 00:13:18.974 | 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1500], 00:13:18.974 | 70.00th=[ 1565], 80.00th=[ 1647], 90.00th=[ 1795], 95.00th=[ 1942], 00:13:18.974 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2868], 99.95th=[ 2933], 00:13:18.974 | 99.99th=[ 4490] 00:13:18.974 bw ( KiB/s): min=155504, max=161504, per=100.00%, avg=158666.67, stdev=1942.68, samples=9 00:13:18.974 iops : min=38876, max=40376, avg=39666.67, stdev=485.67, samples=9 00:13:18.974 lat (usec) : 750=0.01%, 1000=0.46% 00:13:18.974 lat (msec) : 2=95.96%, 4=3.56%, 10=0.02% 00:13:18.974 cpu : usr=70.40%, sys=26.40%, ctx=28, majf=0, minf=1064 00:13:18.974 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:18.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.974 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:18.974 issued rwts: total=0,198278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.974 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:18.974 00:13:18.974 Run status group 0 (all jobs): 00:13:18.974 WRITE: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=775MiB (812MB), run=5001-5001msec 00:13:19.545 ----------------------------------------------------- 00:13:19.545 Suppressions used: 00:13:19.545 count bytes template 00:13:19.545 1 11 /usr/src/fio/parse.c 00:13:19.545 1 8 libtcmalloc_minimal.so 00:13:19.545 1 904 libcrypto.so 00:13:19.545 ----------------------------------------------------- 00:13:19.545 00:13:19.545 00:13:19.545 real 0m11.839s 00:13:19.545 user 0m7.555s 00:13:19.545 sys 0m3.614s 00:13:19.545 03:00:35 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.545 ************************************ 00:13:19.546 END TEST xnvme_fio_plugin 00:13:19.546 ************************************ 00:13:19.546 03:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:19.546 03:00:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:19.546 03:00:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:19.546 03:00:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.546 03:00:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.546 ************************************ 00:13:19.546 START TEST xnvme_rpc 00:13:19.546 ************************************ 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:19.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81906 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81906 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81906 ']' 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.546 03:00:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.546 [2024-11-29 03:00:35.420193] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
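
The RPC exchange that follows boils down to a create/inspect/delete sequence against the target started above: create the xnvme bdev on /dev/ng0n1, read the config back through framework_get_config with four jq probes (name, filename, io_mechanism, conserve_cpu), then delete it. Issued by hand with scripts/rpc.py it would look roughly like the sketch below (an approximation: the test goes through its rpc_cmd/rpc_xnvme wrappers, and the trailing '' in its create call stands for the omitted conserve_cpu flag):

# Sketch of the RPC sequence driven below; assumes the default
# /var/tmp/spdk.sock target and the repo layout from this log.
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
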
00:13:19.546 [2024-11-29 03:00:35.420339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81906 ] 00:13:19.805 [2024-11-29 03:00:35.566515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.805 [2024-11-29 03:00:35.586607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.372 xnvme_bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:20.372 
03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.372 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81906 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81906 ']' 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81906 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81906 00:13:20.631 killing process with pid 81906 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81906' 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81906 00:13:20.631 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81906 00:13:20.891 ************************************ 00:13:20.891 END TEST xnvme_rpc 00:13:20.891 ************************************ 00:13:20.891 00:13:20.891 real 0m1.291s 00:13:20.891 user 0m1.418s 00:13:20.891 sys 0m0.318s 00:13:20.891 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.891 03:00:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.891 03:00:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:20.891 03:00:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.891 03:00:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.891 03:00:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.891 ************************************ 00:13:20.891 START TEST xnvme_bdevperf 00:13:20.891 ************************************ 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:20.892 03:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.892 { 00:13:20.892 "subsystems": [ 00:13:20.892 { 00:13:20.892 "subsystem": "bdev", 00:13:20.892 "config": [ 00:13:20.892 { 00:13:20.892 "params": { 00:13:20.892 "io_mechanism": "io_uring_cmd", 00:13:20.892 "conserve_cpu": false, 00:13:20.892 "filename": "/dev/ng0n1", 00:13:20.892 "name": "xnvme_bdev" 00:13:20.892 }, 00:13:20.892 "method": "bdev_xnvme_create" 00:13:20.892 }, 00:13:20.892 { 00:13:20.892 "method": "bdev_wait_for_examine" 00:13:20.892 } 00:13:20.892 ] 00:13:20.892 } 00:13:20.892 ] 00:13:20.892 } 00:13:20.892 [2024-11-29 03:00:36.732714] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:20.892 [2024-11-29 03:00:36.732853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81964 ] 00:13:20.892 [2024-11-29 03:00:36.879533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.153 [2024-11-29 03:00:36.899235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.153 Running I/O for 5 seconds... 00:13:23.042 35913.00 IOPS, 140.29 MiB/s [2024-11-29T03:00:40.424Z] 38338.00 IOPS, 149.76 MiB/s [2024-11-29T03:00:40.998Z] 36882.00 IOPS, 144.07 MiB/s [2024-11-29T03:00:42.387Z] 36966.00 IOPS, 144.40 MiB/s [2024-11-29T03:00:42.387Z] 37132.80 IOPS, 145.05 MiB/s 00:13:26.395 Latency(us) 00:13:26.395 [2024-11-29T03:00:42.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.395 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:26.395 xnvme_bdev : 5.00 37120.71 145.00 0.00 0.00 1720.15 450.56 10183.29 00:13:26.395 [2024-11-29T03:00:42.387Z] =================================================================================================================== 00:13:26.395 [2024-11-29T03:00:42.387Z] Total : 37120.71 145.00 0.00 0.00 1720.15 450.56 10183.29 00:13:26.395 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.395 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:26.395 03:00:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:26.395 03:00:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:26.395 03:00:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.395 { 00:13:26.395 "subsystems": [ 00:13:26.395 { 00:13:26.395 "subsystem": "bdev", 00:13:26.395 "config": [ 00:13:26.395 { 00:13:26.395 "params": { 00:13:26.395 "io_mechanism": "io_uring_cmd", 00:13:26.395 "conserve_cpu": false, 00:13:26.395 "filename": "/dev/ng0n1", 00:13:26.395 "name": "xnvme_bdev" 00:13:26.395 }, 00:13:26.395 "method": "bdev_xnvme_create" 00:13:26.395 }, 00:13:26.395 { 00:13:26.396 "method": "bdev_wait_for_examine" 00:13:26.396 } 00:13:26.396 ] 00:13:26.396 } 00:13:26.396 ] 00:13:26.396 } 00:13:26.396 [2024-11-29 03:00:42.202760] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
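
For quick comparisons across passes, the Total row of each bdevperf latency table is the one to watch: the randread pass above lands at 37120.71 IOPS with io_uring_cmd on /dev/ng0n1, versus 32370.59 IOPS for the earlier io_uring pass on /dev/nvme0n1. A throwaway extraction sketch for a saved log (assuming the table layout printed in this run, captured without the harness timestamps; column positions may shift across SPDK versions):

# Pull the "Total" IOPS / bandwidth rows out of a saved bdevperf log.
grep -E '^[[:space:]]*Total[[:space:]]*:' bdevperf.log \
    | awk '{ printf "%s IOPS, %s MiB/s\n", $3, $4 }'
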
00:13:26.396 [2024-11-29 03:00:42.202909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82027 ] 00:13:26.396 [2024-11-29 03:00:42.351739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.396 [2024-11-29 03:00:42.379993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.658 Running I/O for 5 seconds... 00:13:28.546 34670.00 IOPS, 135.43 MiB/s [2024-11-29T03:00:45.923Z] 36307.00 IOPS, 141.82 MiB/s [2024-11-29T03:00:46.499Z] 37292.00 IOPS, 145.67 MiB/s [2024-11-29T03:00:47.884Z] 37682.50 IOPS, 147.20 MiB/s [2024-11-29T03:00:47.884Z] 37833.40 IOPS, 147.79 MiB/s 00:13:31.892 Latency(us) 00:13:31.892 [2024-11-29T03:00:47.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.892 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:31.892 xnvme_bdev : 5.00 37813.35 147.71 0.00 0.00 1688.32 384.39 4889.99 00:13:31.892 [2024-11-29T03:00:47.884Z] =================================================================================================================== 00:13:31.892 [2024-11-29T03:00:47.884Z] Total : 37813.35 147.71 0.00 0.00 1688.32 384.39 4889.99 00:13:31.892 03:00:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.892 03:00:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:13:31.892 03:00:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.892 03:00:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.892 03:00:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.892 { 00:13:31.892 "subsystems": [ 00:13:31.892 { 00:13:31.892 "subsystem": "bdev", 00:13:31.892 "config": [ 00:13:31.892 { 00:13:31.892 "params": { 00:13:31.892 "io_mechanism": "io_uring_cmd", 00:13:31.892 "conserve_cpu": false, 00:13:31.892 "filename": "/dev/ng0n1", 00:13:31.892 "name": "xnvme_bdev" 00:13:31.892 }, 00:13:31.892 "method": "bdev_xnvme_create" 00:13:31.892 }, 00:13:31.892 { 00:13:31.892 "method": "bdev_wait_for_examine" 00:13:31.892 } 00:13:31.892 ] 00:13:31.892 } 00:13:31.892 ] 00:13:31.892 } 00:13:31.892 [2024-11-29 03:00:47.700298] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:31.892 [2024-11-29 03:00:47.700547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82092 ] 00:13:31.892 [2024-11-29 03:00:47.845754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.892 [2024-11-29 03:00:47.876893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.153 Running I/O for 5 seconds... 
00:13:34.042 77632.00 IOPS, 303.25 MiB/s [2024-11-29T03:00:50.978Z] 77696.00 IOPS, 303.50 MiB/s [2024-11-29T03:00:52.367Z] 78144.00 IOPS, 305.25 MiB/s [2024-11-29T03:00:53.302Z] 77088.00 IOPS, 301.12 MiB/s [2024-11-29T03:00:53.302Z] 78694.40 IOPS, 307.40 MiB/s 00:13:37.310 Latency(us) 00:13:37.310 [2024-11-29T03:00:53.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.310 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:13:37.310 xnvme_bdev : 5.00 78661.94 307.27 0.00 0.00 810.17 532.48 2760.07 00:13:37.310 [2024-11-29T03:00:53.303Z] =================================================================================================================== 00:13:37.311 [2024-11-29T03:00:53.303Z] Total : 78661.94 307.27 0.00 0.00 810.17 532.48 2760.07 00:13:37.311 03:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.311 03:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:13:37.311 03:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.311 03:00:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.311 03:00:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.311 { 00:13:37.311 "subsystems": [ 00:13:37.311 { 00:13:37.311 "subsystem": "bdev", 00:13:37.311 "config": [ 00:13:37.311 { 00:13:37.311 "params": { 00:13:37.311 "io_mechanism": "io_uring_cmd", 00:13:37.311 "conserve_cpu": false, 00:13:37.311 "filename": "/dev/ng0n1", 00:13:37.311 "name": "xnvme_bdev" 00:13:37.311 }, 00:13:37.311 "method": "bdev_xnvme_create" 00:13:37.311 }, 00:13:37.311 { 00:13:37.311 "method": "bdev_wait_for_examine" 00:13:37.311 } 00:13:37.311 ] 00:13:37.311 } 00:13:37.311 ] 00:13:37.311 } 00:13:37.311 [2024-11-29 03:00:53.155381] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:37.311 [2024-11-29 03:00:53.155606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82161 ] 00:13:37.311 [2024-11-29 03:00:53.298125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.569 [2024-11-29 03:00:53.315943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.569 Running I/O for 5 seconds... 
00:13:39.457 4683.00 IOPS, 18.29 MiB/s [2024-11-29T03:00:56.386Z] 2826.00 IOPS, 11.04 MiB/s [2024-11-29T03:00:57.772Z] 6076.67 IOPS, 23.74 MiB/s [2024-11-29T03:00:58.716Z] 14543.00 IOPS, 56.81 MiB/s [2024-11-29T03:00:58.716Z] 19097.40 IOPS, 74.60 MiB/s 00:13:42.724 Latency(us) 00:13:42.724 [2024-11-29T03:00:58.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.724 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:13:42.724 xnvme_bdev : 5.00 19092.40 74.58 0.00 0.00 3347.96 59.86 205682.22 00:13:42.724 [2024-11-29T03:00:58.716Z] =================================================================================================================== 00:13:42.724 [2024-11-29T03:00:58.716Z] Total : 19092.40 74.58 0.00 0.00 3347.96 59.86 205682.22 00:13:42.724 00:13:42.724 real 0m21.898s 00:13:42.724 user 0m10.962s 00:13:42.724 sys 0m10.459s 00:13:42.724 ************************************ 00:13:42.724 END TEST xnvme_bdevperf 00:13:42.724 ************************************ 00:13:42.724 03:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.724 03:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.724 03:00:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:42.724 03:00:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.724 03:00:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.724 03:00:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.724 ************************************ 00:13:42.724 START TEST xnvme_fio_plugin 00:13:42.724 ************************************ 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
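
The fio plugin passes below all repeat the preload dance visible in the xtrace: since fio itself is not ASan-instrumented, the harness resolves which ASan runtime the spdk_bdev plugin links against and preloads it ahead of the plugin. Condensed from the commands in this log (the JSON bdev config still has to arrive on fd 62, for example via the process-substitution trick sketched earlier):

# Resolve the plugin's ASan runtime and preload it first (order matters:
# sanitizer before plugin), then run fio exactly as the harness does.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev
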
00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.724 03:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.724 { 00:13:42.724 "subsystems": [ 00:13:42.724 { 00:13:42.724 "subsystem": "bdev", 00:13:42.724 "config": [ 00:13:42.724 { 00:13:42.724 "params": { 00:13:42.724 "io_mechanism": "io_uring_cmd", 00:13:42.724 "conserve_cpu": false, 00:13:42.724 "filename": "/dev/ng0n1", 00:13:42.724 "name": "xnvme_bdev" 00:13:42.724 }, 00:13:42.724 "method": "bdev_xnvme_create" 00:13:42.724 }, 00:13:42.724 { 00:13:42.724 "method": "bdev_wait_for_examine" 00:13:42.724 } 00:13:42.724 ] 00:13:42.724 } 00:13:42.724 ] 00:13:42.724 } 00:13:42.986 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:42.986 fio-3.35 00:13:42.986 Starting 1 thread 00:13:48.279 00:13:48.279 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82266: Fri Nov 29 03:01:04 2024 00:13:48.279 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(661MiB/5001msec) 00:13:48.279 slat (usec): min=2, max=115, avg= 3.90, stdev= 2.37 00:13:48.279 clat (usec): min=961, max=3604, avg=1731.37, stdev=307.01 00:13:48.279 lat (usec): min=964, max=3636, avg=1735.27, stdev=307.43 00:13:48.279 clat percentiles (usec): 00:13:48.279 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1467], 00:13:48.279 | 30.00th=[ 1549], 40.00th=[ 1614], 50.00th=[ 1696], 60.00th=[ 1778], 00:13:48.279 | 70.00th=[ 1860], 80.00th=[ 1975], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:48.279 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2999], 99.95th=[ 3097], 00:13:48.279 | 99.99th=[ 3425] 00:13:48.279 bw ( KiB/s): min=131072, max=140800, per=100.00%, avg=135708.44, stdev=3254.43, samples=9 00:13:48.279 iops : min=32768, max=35200, avg=33927.11, stdev=813.61, samples=9 00:13:48.279 lat (usec) : 1000=0.01% 00:13:48.279 lat (msec) : 2=81.67%, 4=18.32% 00:13:48.279 cpu : usr=35.80%, sys=62.74%, ctx=18, majf=0, minf=1063 00:13:48.279 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:48.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.279 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:13:48.279 issued rwts: total=169312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:48.279 00:13:48.279 Run status group 0 (all jobs): 00:13:48.279 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=661MiB (694MB), run=5001-5001msec 00:13:48.851 ----------------------------------------------------- 00:13:48.851 Suppressions used: 00:13:48.851 count bytes template 00:13:48.851 1 11 /usr/src/fio/parse.c 00:13:48.851 1 8 libtcmalloc_minimal.so 00:13:48.851 1 904 libcrypto.so 00:13:48.851 ----------------------------------------------------- 00:13:48.851 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:48.851 03:01:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.851 { 00:13:48.851 "subsystems": [ 00:13:48.851 { 00:13:48.851 "subsystem": "bdev", 00:13:48.851 "config": [ 00:13:48.851 { 00:13:48.851 "params": { 00:13:48.851 "io_mechanism": "io_uring_cmd", 00:13:48.851 "conserve_cpu": false, 00:13:48.851 "filename": "/dev/ng0n1", 00:13:48.851 "name": "xnvme_bdev" 00:13:48.851 }, 00:13:48.851 "method": "bdev_xnvme_create" 00:13:48.851 }, 00:13:48.851 { 00:13:48.851 "method": "bdev_wait_for_examine" 00:13:48.851 } 00:13:48.851 ] 00:13:48.851 } 00:13:48.851 ] 00:13:48.851 } 00:13:49.112 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.112 fio-3.35 00:13:49.112 Starting 1 thread 00:13:54.403 00:13:54.403 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82345: Fri Nov 29 03:01:10 2024 00:13:54.403 write: IOPS=36.7k, BW=143MiB/s (150MB/s)(716MiB/5002msec); 0 zone resets 00:13:54.403 slat (usec): min=2, max=192, avg= 4.03, stdev= 2.28 00:13:54.403 clat (usec): min=149, max=4840, avg=1584.31, stdev=297.59 00:13:54.403 lat (usec): min=152, max=4844, avg=1588.34, stdev=298.10 00:13:54.403 clat percentiles (usec): 00:13:54.403 | 1.00th=[ 1045], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1336], 00:13:54.403 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1565], 60.00th=[ 1631], 00:13:54.403 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2089], 00:13:54.403 | 99.00th=[ 2442], 99.50th=[ 2606], 99.90th=[ 3261], 99.95th=[ 3425], 00:13:54.403 | 99.99th=[ 3818] 00:13:54.403 bw ( KiB/s): min=139192, max=167504, per=100.00%, avg=147186.67, stdev=11230.97, samples=9 00:13:54.403 iops : min=34798, max=41876, avg=36796.67, stdev=2807.74, samples=9 00:13:54.403 lat (usec) : 250=0.01%, 500=0.03%, 750=0.06%, 1000=0.33% 00:13:54.403 lat (msec) : 2=91.84%, 4=7.72%, 10=0.01% 00:13:54.403 cpu : usr=37.19%, sys=61.47%, ctx=8, majf=0, minf=1064 00:13:54.403 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.6%, >=64=1.6% 00:13:54.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.403 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:54.403 issued rwts: total=0,183368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:54.403 00:13:54.403 Run status group 0 (all jobs): 00:13:54.403 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=716MiB (751MB), run=5002-5002msec 00:13:54.975 ----------------------------------------------------- 00:13:54.975 Suppressions used: 00:13:54.975 count bytes template 00:13:54.975 1 11 /usr/src/fio/parse.c 00:13:54.975 1 8 libtcmalloc_minimal.so 00:13:54.975 1 904 libcrypto.so 00:13:54.975 ----------------------------------------------------- 00:13:54.975 00:13:54.975 00:13:54.975 real 0m12.063s 00:13:54.975 user 0m4.856s 00:13:54.975 sys 0m6.751s 00:13:54.975 ************************************ 00:13:54.975 END TEST xnvme_fio_plugin 00:13:54.975 ************************************ 00:13:54.975 03:01:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.975 03:01:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:54.975 03:01:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:54.975 03:01:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:54.975 03:01:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:13:54.975 03:01:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:54.975 03:01:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.975 03:01:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.975 03:01:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.975 ************************************ 00:13:54.975 START TEST xnvme_rpc 00:13:54.975 ************************************ 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82425 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82425 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82425 ']' 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.975 03:01:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.975 [2024-11-29 03:01:10.876023] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
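For reference, the RPC sequence that TEST xnvme_rpc drives against this spdk_tgt can be reproduced by hand. A minimal sketch, assuming a stock SPDK checkout where scripts/rpc.py talks to the default /var/tmp/spdk.sock:
# create an xnvme bdev over io_uring_cmd with conserve_cpu enabled (-c),
# read the parameter back out of the saved bdev config, then tear it down
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev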
00:13:54.975 [2024-11-29 03:01:10.876437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82425 ] 00:13:55.236 [2024-11-29 03:01:11.025222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.236 [2024-11-29 03:01:11.065094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.808 xnvme_bdev 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.808 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:56.069 
03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82425 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82425 ']' 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82425 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82425 00:13:56.069 killing process with pid 82425 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82425' 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82425 00:13:56.069 03:01:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82425 00:13:56.641 00:13:56.641 real 0m1.617s 00:13:56.641 user 0m1.588s 00:13:56.641 sys 0m0.509s 00:13:56.641 03:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.641 ************************************ 00:13:56.641 END TEST xnvme_rpc 00:13:56.641 ************************************ 00:13:56.641 03:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.641 03:01:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:56.641 03:01:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.642 03:01:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.642 03:01:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 ************************************ 00:13:56.642 START TEST xnvme_bdevperf 00:13:56.642 ************************************ 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.642 03:01:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.642 { 00:13:56.642 "subsystems": [ 00:13:56.642 { 00:13:56.642 "subsystem": "bdev", 00:13:56.642 "config": [ 00:13:56.642 { 00:13:56.642 "params": { 00:13:56.642 "io_mechanism": "io_uring_cmd", 00:13:56.642 "conserve_cpu": true, 00:13:56.642 "filename": "/dev/ng0n1", 00:13:56.642 "name": "xnvme_bdev" 00:13:56.642 }, 00:13:56.642 "method": "bdev_xnvme_create" 00:13:56.642 }, 00:13:56.642 { 00:13:56.642 "method": "bdev_wait_for_examine" 00:13:56.642 } 00:13:56.642 ] 00:13:56.642 } 00:13:56.642 ] 00:13:56.642 } 00:13:56.642 [2024-11-29 03:01:12.541339] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:56.642 [2024-11-29 03:01:12.541488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82484 ] 00:13:56.904 [2024-11-29 03:01:12.688572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.904 [2024-11-29 03:01:12.728429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.904 Running I/O for 5 seconds... 00:13:59.232 38400.00 IOPS, 150.00 MiB/s [2024-11-29T03:01:16.165Z] 39232.00 IOPS, 153.25 MiB/s [2024-11-29T03:01:17.107Z] 39274.67 IOPS, 153.42 MiB/s [2024-11-29T03:01:18.046Z] 38959.75 IOPS, 152.19 MiB/s 00:14:02.054 Latency(us) 00:14:02.054 [2024-11-29T03:01:18.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.054 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:02.054 xnvme_bdev : 5.00 38457.77 150.23 0.00 0.00 1660.18 793.99 3982.57 00:14:02.054 [2024-11-29T03:01:18.046Z] =================================================================================================================== 00:14:02.054 [2024-11-29T03:01:18.046Z] Total : 38457.77 150.23 0.00 0.00 1660.18 793.99 3982.57 00:14:02.314 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.314 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:02.314 03:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:02.314 03:01:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.314 03:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.314 { 00:14:02.314 "subsystems": [ 00:14:02.314 { 00:14:02.314 "subsystem": "bdev", 00:14:02.314 "config": [ 00:14:02.314 { 00:14:02.314 "params": { 00:14:02.314 "io_mechanism": "io_uring_cmd", 00:14:02.314 "conserve_cpu": true, 00:14:02.314 "filename": "/dev/ng0n1", 00:14:02.314 "name": "xnvme_bdev" 00:14:02.314 }, 00:14:02.314 "method": "bdev_xnvme_create" 00:14:02.314 }, 00:14:02.314 { 00:14:02.314 "method": "bdev_wait_for_examine" 00:14:02.315 } 00:14:02.315 ] 00:14:02.315 } 00:14:02.315 ] 00:14:02.315 } 00:14:02.315 [2024-11-29 03:01:18.121855] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
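Each bdevperf pass in this test uses the same invocation and varies only the workload (-w randread, randwrite, unmap, write_zeroes); the JSON block printed above is fed to it on /dev/fd/62. A standalone sketch with the config saved to a regular file (bdev.json is a stand-in name):
# queue depth 64, 4 KiB I/O, 5 s run against the xnvme_bdev defined in bdev.json
./build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096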
00:14:02.315 [2024-11-29 03:01:18.122010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82550 ] 00:14:02.315 [2024-11-29 03:01:18.269274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.315 [2024-11-29 03:01:18.297652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.576 Running I/O for 5 seconds... 00:14:04.460 40596.00 IOPS, 158.58 MiB/s [2024-11-29T03:01:21.836Z] 41148.00 IOPS, 160.73 MiB/s [2024-11-29T03:01:22.408Z] 40352.67 IOPS, 157.63 MiB/s [2024-11-29T03:01:23.793Z] 38062.25 IOPS, 148.68 MiB/s [2024-11-29T03:01:23.793Z] 32669.20 IOPS, 127.61 MiB/s 00:14:07.801 Latency(us) 00:14:07.801 [2024-11-29T03:01:23.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.801 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:07.801 xnvme_bdev : 5.01 32599.79 127.34 0.00 0.00 1956.42 81.53 26416.05 00:14:07.801 [2024-11-29T03:01:23.793Z] =================================================================================================================== 00:14:07.801 [2024-11-29T03:01:23.793Z] Total : 32599.79 127.34 0.00 0.00 1956.42 81.53 26416.05 00:14:07.801 03:01:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.801 03:01:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:07.801 03:01:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.801 03:01:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.801 03:01:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.802 { 00:14:07.802 "subsystems": [ 00:14:07.802 { 00:14:07.802 "subsystem": "bdev", 00:14:07.802 "config": [ 00:14:07.802 { 00:14:07.802 "params": { 00:14:07.802 "io_mechanism": "io_uring_cmd", 00:14:07.802 "conserve_cpu": true, 00:14:07.802 "filename": "/dev/ng0n1", 00:14:07.802 "name": "xnvme_bdev" 00:14:07.802 }, 00:14:07.802 "method": "bdev_xnvme_create" 00:14:07.802 }, 00:14:07.802 { 00:14:07.802 "method": "bdev_wait_for_examine" 00:14:07.802 } 00:14:07.802 ] 00:14:07.802 } 00:14:07.802 ] 00:14:07.802 } 00:14:07.802 [2024-11-29 03:01:23.682359] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:07.802 [2024-11-29 03:01:23.682506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82614 ] 00:14:08.062 [2024-11-29 03:01:23.839666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.062 [2024-11-29 03:01:23.867775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.062 Running I/O for 5 seconds... 
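The --json /dev/fd/62 argument seen in these commands is bash process substitution: the harness's gen_conf prints the subsystems block and the app reads it as though it were a file, so no temp file is needed. A sketch of the same plumbing, where emit_conf is a hypothetical stand-in for any command printing that JSON:
# hypothetical emit_conf prints the "subsystems" JSON shown above
./build/examples/bdevperf --json <(emit_conf) -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096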
00:14:10.383 73344.00 IOPS, 286.50 MiB/s [2024-11-29T03:01:27.330Z] 75840.00 IOPS, 296.25 MiB/s [2024-11-29T03:01:28.272Z] 76565.33 IOPS, 299.08 MiB/s [2024-11-29T03:01:29.213Z] 76816.00 IOPS, 300.06 MiB/s 00:14:13.222 Latency(us) 00:14:13.222 [2024-11-29T03:01:29.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.222 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:13.222 xnvme_bdev : 5.00 76024.63 296.97 0.00 0.00 838.21 494.67 5217.67 00:14:13.222 [2024-11-29T03:01:29.214Z] =================================================================================================================== 00:14:13.222 [2024-11-29T03:01:29.214Z] Total : 76024.63 296.97 0.00 0.00 838.21 494.67 5217.67 00:14:13.482 03:01:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.482 03:01:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:13.482 03:01:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.482 03:01:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.482 03:01:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.482 { 00:14:13.482 "subsystems": [ 00:14:13.483 { 00:14:13.483 "subsystem": "bdev", 00:14:13.483 "config": [ 00:14:13.483 { 00:14:13.483 "params": { 00:14:13.483 "io_mechanism": "io_uring_cmd", 00:14:13.483 "conserve_cpu": true, 00:14:13.483 "filename": "/dev/ng0n1", 00:14:13.483 "name": "xnvme_bdev" 00:14:13.483 }, 00:14:13.483 "method": "bdev_xnvme_create" 00:14:13.483 }, 00:14:13.483 { 00:14:13.483 "method": "bdev_wait_for_examine" 00:14:13.483 } 00:14:13.483 ] 00:14:13.483 } 00:14:13.483 ] 00:14:13.483 } 00:14:13.483 [2024-11-29 03:01:29.313527] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:13.483 [2024-11-29 03:01:29.313686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82683 ] 00:14:13.483 [2024-11-29 03:01:29.455894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.744 [2024-11-29 03:01:29.495885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.744 Running I/O for 5 seconds... 
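All of these bdevperf runs share the subsystems block that is printed inline before each one; written out as a standalone file it is simply (bdev.json is a stand-in filename):
# contents match the config printed above
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF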
00:14:16.075 45497.00 IOPS, 177.72 MiB/s [2024-11-29T03:01:32.673Z] 46474.50 IOPS, 181.54 MiB/s [2024-11-29T03:01:33.722Z] 44692.00 IOPS, 174.58 MiB/s [2024-11-29T03:01:34.662Z] 44555.50 IOPS, 174.04 MiB/s 00:14:18.671 Latency(us) 00:14:18.671 [2024-11-29T03:01:34.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.671 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:18.671 xnvme_bdev : 5.00 43813.07 171.14 0.00 0.00 1455.68 237.88 17946.78 00:14:18.671 [2024-11-29T03:01:34.663Z] =================================================================================================================== 00:14:18.671 [2024-11-29T03:01:34.663Z] Total : 43813.07 171.14 0.00 0.00 1455.68 237.88 17946.78 00:14:18.932 00:14:18.932 real 0m22.432s 00:14:18.932 user 0m14.151s 00:14:18.932 sys 0m5.915s 00:14:18.932 ************************************ 00:14:18.932 END TEST xnvme_bdevperf 00:14:18.932 ************************************ 00:14:18.932 03:01:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.932 03:01:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.193 03:01:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:19.193 03:01:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.193 03:01:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.193 03:01:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.193 ************************************ 00:14:19.193 START TEST xnvme_fio_plugin 00:14:19.193 ************************************ 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:19.193 03:01:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:19.193 03:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.193 03:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.193 03:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:19.193 03:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:19.193 03:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.194 03:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.194 { 00:14:19.194 "subsystems": [ 00:14:19.194 { 00:14:19.194 "subsystem": "bdev", 00:14:19.194 "config": [ 00:14:19.194 { 00:14:19.194 "params": { 00:14:19.194 "io_mechanism": "io_uring_cmd", 00:14:19.194 "conserve_cpu": true, 00:14:19.194 "filename": "/dev/ng0n1", 00:14:19.194 "name": "xnvme_bdev" 00:14:19.194 }, 00:14:19.194 "method": "bdev_xnvme_create" 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "method": "bdev_wait_for_examine" 00:14:19.194 } 00:14:19.194 ] 00:14:19.194 } 00:14:19.194 ] 00:14:19.194 } 00:14:19.454 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:19.454 fio-3.35 00:14:19.454 Starting 1 thread 00:14:24.748 00:14:24.748 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82785: Fri Nov 29 03:01:40 2024 00:14:24.748 read: IOPS=40.4k, BW=158MiB/s (166MB/s)(789MiB/5001msec) 00:14:24.748 slat (usec): min=2, max=188, avg= 3.42, stdev= 1.57 00:14:24.748 clat (usec): min=850, max=6755, avg=1447.96, stdev=298.68 00:14:24.748 lat (usec): min=853, max=6759, avg=1451.38, stdev=299.05 00:14:24.748 clat percentiles (usec): 00:14:24.748 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1188], 00:14:24.748 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1467], 00:14:24.748 | 70.00th=[ 1565], 80.00th=[ 1680], 90.00th=[ 1860], 95.00th=[ 2024], 00:14:24.748 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2835], 99.95th=[ 2999], 00:14:24.748 | 99.99th=[ 3163] 00:14:24.748 bw ( KiB/s): min=145408, max=179712, per=100.00%, avg=162586.67, stdev=12855.49, samples=9 00:14:24.748 iops : min=36352, max=44928, avg=40646.67, stdev=3213.87, samples=9 00:14:24.748 lat (usec) : 1000=1.19% 00:14:24.748 lat (msec) : 2=93.38%, 4=5.43%, 10=0.01% 00:14:24.748 cpu : usr=70.72%, sys=26.60%, ctx=7, majf=0, minf=1063 00:14:24.748 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:24.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.748 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:24.748 issued rwts: 
total=202110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.748 00:14:24.748 Run status group 0 (all jobs): 00:14:24.748 READ: bw=158MiB/s (166MB/s), 158MiB/s-158MiB/s (166MB/s-166MB/s), io=789MiB (828MB), run=5001-5001msec 00:14:25.320 ----------------------------------------------------- 00:14:25.320 Suppressions used: 00:14:25.320 count bytes template 00:14:25.320 1 11 /usr/src/fio/parse.c 00:14:25.320 1 8 libtcmalloc_minimal.so 00:14:25.320 1 904 libcrypto.so 00:14:25.320 ----------------------------------------------------- 00:14:25.320 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:25.320 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.321 03:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:14:25.321 { 00:14:25.321 "subsystems": [ 00:14:25.321 { 00:14:25.321 "subsystem": "bdev", 00:14:25.321 "config": [ 00:14:25.321 { 00:14:25.321 "params": { 00:14:25.321 "io_mechanism": "io_uring_cmd", 00:14:25.321 "conserve_cpu": true, 00:14:25.321 "filename": "/dev/ng0n1", 00:14:25.321 "name": "xnvme_bdev" 00:14:25.321 }, 00:14:25.321 "method": "bdev_xnvme_create" 00:14:25.321 }, 00:14:25.321 { 00:14:25.321 "method": "bdev_wait_for_examine" 00:14:25.321 } 00:14:25.321 ] 00:14:25.321 } 00:14:25.321 ] 00:14:25.321 } 00:14:25.321 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:25.321 fio-3.35 00:14:25.321 Starting 1 thread 00:14:31.908 00:14:31.908 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82870: Fri Nov 29 03:01:46 2024 00:14:31.908 write: IOPS=38.4k, BW=150MiB/s (157MB/s)(750MiB/5001msec); 0 zone resets 00:14:31.908 slat (usec): min=2, max=367, avg= 4.18, stdev= 2.55 00:14:31.908 clat (usec): min=657, max=5799, avg=1503.70, stdev=299.64 00:14:31.908 lat (usec): min=660, max=5803, avg=1507.87, stdev=300.17 00:14:31.908 clat percentiles (usec): 00:14:31.908 | 1.00th=[ 996], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:14:31.908 | 30.00th=[ 1336], 40.00th=[ 1401], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:31.908 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1876], 95.00th=[ 2008], 00:14:31.908 | 99.00th=[ 2376], 99.50th=[ 2573], 99.90th=[ 3523], 99.95th=[ 3818], 00:14:31.908 | 99.99th=[ 4555] 00:14:31.908 bw ( KiB/s): min=140056, max=170928, per=98.27%, avg=150810.11, stdev=9298.91, samples=9 00:14:31.908 iops : min=35014, max=42732, avg=37702.44, stdev=2324.77, samples=9 00:14:31.908 lat (usec) : 750=0.02%, 1000=1.08% 00:14:31.908 lat (msec) : 2=93.74%, 4=5.14%, 10=0.03% 00:14:31.908 cpu : usr=56.90%, sys=38.12%, ctx=12, majf=0, minf=1064 00:14:31.908 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.7% 00:14:31.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.909 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:31.909 issued rwts: total=0,191876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.909 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:31.909 00:14:31.909 Run status group 0 (all jobs): 00:14:31.909 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=750MiB (786MB), run=5001-5001msec 00:14:31.909 ----------------------------------------------------- 00:14:31.909 Suppressions used: 00:14:31.909 count bytes template 00:14:31.909 1 11 /usr/src/fio/parse.c 00:14:31.909 1 8 libtcmalloc_minimal.so 00:14:31.909 1 904 libcrypto.so 00:14:31.909 ----------------------------------------------------- 00:14:31.909 00:14:31.909 00:14:31.909 real 0m12.039s 00:14:31.909 user 0m7.497s 00:14:31.909 sys 0m3.847s 00:14:31.909 ************************************ 00:14:31.909 END TEST xnvme_fio_plugin 00:14:31.909 ************************************ 00:14:31.909 03:01:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.909 03:01:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 Process with pid 82425 is not found 00:14:31.909 03:01:47 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82425 00:14:31.909 03:01:47 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82425 ']' 00:14:31.909 03:01:47 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82425 00:14:31.909 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82425) - No such process 00:14:31.909 03:01:47 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82425 is not found' 00:14:31.909 ************************************ 00:14:31.909 END TEST nvme_xnvme 00:14:31.909 ************************************ 00:14:31.909 00:14:31.909 real 2m57.640s 00:14:31.909 user 1m27.966s 00:14:31.909 sys 1m15.038s 00:14:31.909 03:01:47 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.909 03:01:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 03:01:47 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:31.909 03:01:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.909 03:01:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.909 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 ************************************ 00:14:31.909 START TEST blockdev_xnvme 00:14:31.909 ************************************ 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:31.909 * Looking for test storage... 00:14:31.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.909 03:01:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:31.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.909 --rc genhtml_branch_coverage=1 00:14:31.909 --rc genhtml_function_coverage=1 00:14:31.909 --rc genhtml_legend=1 00:14:31.909 --rc geninfo_all_blocks=1 00:14:31.909 --rc geninfo_unexecuted_blocks=1 00:14:31.909 00:14:31.909 ' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:31.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.909 --rc genhtml_branch_coverage=1 00:14:31.909 --rc genhtml_function_coverage=1 00:14:31.909 --rc genhtml_legend=1 00:14:31.909 --rc geninfo_all_blocks=1 00:14:31.909 --rc geninfo_unexecuted_blocks=1 00:14:31.909 00:14:31.909 ' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:31.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.909 --rc genhtml_branch_coverage=1 00:14:31.909 --rc genhtml_function_coverage=1 00:14:31.909 --rc genhtml_legend=1 00:14:31.909 --rc geninfo_all_blocks=1 00:14:31.909 --rc geninfo_unexecuted_blocks=1 00:14:31.909 00:14:31.909 ' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:31.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.909 --rc genhtml_branch_coverage=1 00:14:31.909 --rc genhtml_function_coverage=1 00:14:31.909 --rc genhtml_legend=1 00:14:31.909 --rc geninfo_all_blocks=1 00:14:31.909 --rc geninfo_unexecuted_blocks=1 00:14:31.909 00:14:31.909 ' 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82999 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:31.909 03:01:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82999 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 82999 ']' 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.909 03:01:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.909 [2024-11-29 03:01:47.384118] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
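setup_xnvme_conf, which runs next, amounts to one bdev_xnvme_create per /dev/nvme*n* namespace with io_mechanism io_uring and conserve_cpu (-c), followed by a wait for bdev examine. A sketch of that loop, assuming scripts/rpc.py as the transport instead of the harness's batched rpc_cmd:
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue   # skip anything that is not a block device
  ./scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
done
./scripts/rpc.py bdev_wait_for_examine   # let the bdev layer finish probing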
00:14:31.909 [2024-11-29 03:01:47.384496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82999 ] 00:14:31.909 [2024-11-29 03:01:47.532034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.909 [2024-11-29 03:01:47.561938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.483 03:01:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.483 03:01:48 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:14:32.483 03:01:48 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:32.483 03:01:48 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:14:32.483 03:01:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:32.483 03:01:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:32.483 03:01:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:32.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:33.318 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:33.318 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:33.318 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:14:33.318 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
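The is_block_zoned checks being applied to each namespace here read the block queue's zoned attribute; a device counts as zoned only when that attribute holds something other than none. A minimal sketch of the same check:
is_block_zoned() {
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1   # no attribute: treat as not zoned
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # "none" means a conventional device
}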
00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:33.581 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.581 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:14:33.582 nvme0n1 00:14:33.582 nvme0n2 00:14:33.582 nvme0n3 00:14:33.582 nvme1n1 00:14:33.582 nvme2n1 00:14:33.582 nvme3n1 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0c3d84dc-cdf0-4403-aa22-a23be8e6b5a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c3d84dc-cdf0-4403-aa22-a23be8e6b5a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "94fab074-3574-438c-ac20-68ec94d29b82"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94fab074-3574-438c-ac20-68ec94d29b82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4dab130-9f36-48e5-b68a-21b393d0f2db"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4dab130-9f36-48e5-b68a-21b393d0f2db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1d67feef-f59a-4f2f-b0a4-881f1c4314be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1d67feef-f59a-4f2f-b0a4-881f1c4314be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ff11ae4c-b27c-42e0-9edc-6e0985e1b607"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ff11ae4c-b27c-42e0-9edc-6e0985e1b607",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "01147c7f-193b-4a85-b6fe-f70297994625"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "01147c7f-193b-4a85-b6fe-f70297994625",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:33.582 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 82999 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82999 ']' 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 82999 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.582 03:01:49 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82999 00:14:33.867 killing process with pid 82999 00:14:33.867 03:01:49 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.867 03:01:49 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.867 03:01:49 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82999' 00:14:33.867 03:01:49 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 82999 00:14:33.867 03:01:49 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 82999 00:14:34.129 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:34.129 03:01:49 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:34.129 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:34.129 03:01:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.129 03:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.129 ************************************ 00:14:34.129 START TEST bdev_hello_world 00:14:34.129 ************************************ 00:14:34.129 03:01:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:34.129 [2024-11-29 03:01:49.952927] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:34.129 [2024-11-29 03:01:49.953063] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83261 ] 00:14:34.129 [2024-11-29 03:01:50.100594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.392 [2024-11-29 03:01:50.130694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.392 [2024-11-29 03:01:50.358930] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:34.392 [2024-11-29 03:01:50.358987] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:34.392 [2024-11-29 03:01:50.359015] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:34.392 [2024-11-29 03:01:50.361282] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:34.392 [2024-11-29 03:01:50.362298] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:34.392 [2024-11-29 03:01:50.362465] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:34.392 [2024-11-29 03:01:50.362955] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
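The hello_bdev sequence above follows the example's full cycle: open the bdev, get an I/O channel, write the string, then read it back. A minimal sketch of reproducing this smoke test by hand, assuming the repo layout used in this run (the paths and the nvme0n1 bdev name are copied from the trace; hello_bdev loads the bdev config from the JSON file directly, so no separately running target is needed):

  # run the hello-world example against the first xNVMe bdev from bdev.json
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1

The -b argument is the bdev name the harness picked earlier with jq from the unclaimed-bdev list.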
00:14:34.392 00:14:34.392 [2024-11-29 03:01:50.363000] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:34.654 ************************************ 00:14:34.654 END TEST bdev_hello_world 00:14:34.654 ************************************ 00:14:34.654 00:14:34.654 real 0m0.666s 00:14:34.654 user 0m0.334s 00:14:34.654 sys 0m0.187s 00:14:34.654 03:01:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.654 03:01:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 03:01:50 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:34.654 03:01:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.654 03:01:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.654 03:01:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.654 ************************************ 00:14:34.654 START TEST bdev_bounds 00:14:34.654 ************************************ 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83292 00:14:34.654 Process bdevio pid: 83292 00:14:34.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83292' 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83292 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83292 ']' 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.654 03:01:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:34.915 [2024-11-29 03:01:50.693104] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:34.915 [2024-11-29 03:01:50.693470] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83292 ] 00:14:34.915 [2024-11-29 03:01:50.841298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:34.915 [2024-11-29 03:01:50.874040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.915 [2024-11-29 03:01:50.874377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.915 [2024-11-29 03:01:50.874424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.859 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.859 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:35.859 03:01:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:35.859 I/O targets: 00:14:35.859 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:35.859 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:35.859 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:35.859 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:35.859 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:35.859 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:35.859 00:14:35.859 00:14:35.859 CUnit - A unit testing framework for C - Version 2.1-3 00:14:35.859 http://cunit.sourceforge.net/ 00:14:35.859 00:14:35.859 00:14:35.859 Suite: bdevio tests on: nvme3n1 00:14:35.859 Test: blockdev write read block ...passed 00:14:35.859 Test: blockdev write zeroes read block ...passed 00:14:35.859 Test: blockdev write zeroes read no split ...passed 00:14:35.859 Test: blockdev write zeroes read split ...passed 00:14:35.859 Test: blockdev write zeroes read split partial ...passed 00:14:35.859 Test: blockdev reset ...passed 00:14:35.859 Test: blockdev write read 8 blocks ...passed 00:14:35.859 Test: blockdev write read size > 128k ...passed 00:14:35.859 Test: blockdev write read invalid size ...passed 00:14:35.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:35.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:35.859 Test: blockdev write read max offset ...passed 00:14:35.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:35.859 Test: blockdev writev readv 8 blocks ...passed 00:14:35.859 Test: blockdev writev readv 30 x 1block ...passed 00:14:35.859 Test: blockdev writev readv block ...passed 00:14:35.859 Test: blockdev writev readv size > 128k ...passed 00:14:35.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:35.859 Test: blockdev comparev and writev ...passed 00:14:35.859 Test: blockdev nvme passthru rw ...passed 00:14:35.859 Test: blockdev nvme passthru vendor specific ...passed 00:14:35.859 Test: blockdev nvme admin passthru ...passed 00:14:35.859 Test: blockdev copy ...passed 00:14:35.859 Suite: bdevio tests on: nvme2n1 00:14:35.859 Test: blockdev write read block ...passed 00:14:35.859 Test: blockdev write zeroes read block ...passed 00:14:35.859 Test: blockdev write zeroes read no split ...passed 00:14:35.859 Test: blockdev write zeroes read split ...passed 00:14:35.859 Test: blockdev write zeroes read split partial ...passed 00:14:35.859 Test: blockdev reset ...passed 
00:14:35.859 Test: blockdev write read 8 blocks ...passed 00:14:35.859 Test: blockdev write read size > 128k ...passed 00:14:35.859 Test: blockdev write read invalid size ...passed 00:14:35.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:35.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:35.859 Test: blockdev write read max offset ...passed 00:14:35.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:35.859 Test: blockdev writev readv 8 blocks ...passed 00:14:35.859 Test: blockdev writev readv 30 x 1block ...passed 00:14:35.859 Test: blockdev writev readv block ...passed 00:14:35.859 Test: blockdev writev readv size > 128k ...passed 00:14:35.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:35.859 Test: blockdev comparev and writev ...passed 00:14:35.859 Test: blockdev nvme passthru rw ...passed 00:14:35.859 Test: blockdev nvme passthru vendor specific ...passed 00:14:35.859 Test: blockdev nvme admin passthru ...passed 00:14:35.859 Test: blockdev copy ...passed 00:14:35.859 Suite: bdevio tests on: nvme1n1 00:14:35.859 Test: blockdev write read block ...passed 00:14:35.859 Test: blockdev write zeroes read block ...passed 00:14:35.860 Test: blockdev write zeroes read no split ...passed 00:14:35.860 Test: blockdev write zeroes read split ...passed 00:14:35.860 Test: blockdev write zeroes read split partial ...passed 00:14:35.860 Test: blockdev reset ...passed 00:14:35.860 Test: blockdev write read 8 blocks ...passed 00:14:35.860 Test: blockdev write read size > 128k ...passed 00:14:35.860 Test: blockdev write read invalid size ...passed 00:14:35.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:35.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:35.860 Test: blockdev write read max offset ...passed 00:14:35.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:35.860 Test: blockdev writev readv 8 blocks ...passed 00:14:35.860 Test: blockdev writev readv 30 x 1block ...passed 00:14:35.860 Test: blockdev writev readv block ...passed 00:14:35.860 Test: blockdev writev readv size > 128k ...passed 00:14:35.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:35.860 Test: blockdev comparev and writev ...passed 00:14:35.860 Test: blockdev nvme passthru rw ...passed 00:14:35.860 Test: blockdev nvme passthru vendor specific ...passed 00:14:35.860 Test: blockdev nvme admin passthru ...passed 00:14:35.860 Test: blockdev copy ...passed 00:14:35.860 Suite: bdevio tests on: nvme0n3 00:14:35.860 Test: blockdev write read block ...passed 00:14:35.860 Test: blockdev write zeroes read block ...passed 00:14:35.860 Test: blockdev write zeroes read no split ...passed 00:14:35.860 Test: blockdev write zeroes read split ...passed 00:14:35.860 Test: blockdev write zeroes read split partial ...passed 00:14:35.860 Test: blockdev reset ...passed 00:14:35.860 Test: blockdev write read 8 blocks ...passed 00:14:35.860 Test: blockdev write read size > 128k ...passed 00:14:35.860 Test: blockdev write read invalid size ...passed 00:14:35.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:35.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:35.860 Test: blockdev write read max offset ...passed 00:14:35.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:35.860 Test: blockdev writev readv 8 blocks 
...passed 00:14:35.860 Test: blockdev writev readv 30 x 1block ...passed 00:14:35.860 Test: blockdev writev readv block ...passed 00:14:35.860 Test: blockdev writev readv size > 128k ...passed 00:14:35.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:35.860 Test: blockdev comparev and writev ...passed 00:14:35.860 Test: blockdev nvme passthru rw ...passed 00:14:35.860 Test: blockdev nvme passthru vendor specific ...passed 00:14:35.860 Test: blockdev nvme admin passthru ...passed 00:14:35.860 Test: blockdev copy ...passed 00:14:35.860 Suite: bdevio tests on: nvme0n2 00:14:35.860 Test: blockdev write read block ...passed 00:14:35.860 Test: blockdev write zeroes read block ...passed 00:14:36.121 Test: blockdev write zeroes read no split ...passed 00:14:36.121 Test: blockdev write zeroes read split ...passed 00:14:36.121 Test: blockdev write zeroes read split partial ...passed 00:14:36.121 Test: blockdev reset ...passed 00:14:36.121 Test: blockdev write read 8 blocks ...passed 00:14:36.121 Test: blockdev write read size > 128k ...passed 00:14:36.121 Test: blockdev write read invalid size ...passed 00:14:36.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:36.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:36.121 Test: blockdev write read max offset ...passed 00:14:36.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:36.121 Test: blockdev writev readv 8 blocks ...passed 00:14:36.121 Test: blockdev writev readv 30 x 1block ...passed 00:14:36.121 Test: blockdev writev readv block ...passed 00:14:36.121 Test: blockdev writev readv size > 128k ...passed 00:14:36.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:36.121 Test: blockdev comparev and writev ...passed 00:14:36.121 Test: blockdev nvme passthru rw ...passed 00:14:36.121 Test: blockdev nvme passthru vendor specific ...passed 00:14:36.121 Test: blockdev nvme admin passthru ...passed 00:14:36.121 Test: blockdev copy ...passed 00:14:36.121 Suite: bdevio tests on: nvme0n1 00:14:36.121 Test: blockdev write read block ...passed 00:14:36.121 Test: blockdev write zeroes read block ...passed 00:14:36.121 Test: blockdev write zeroes read no split ...passed 00:14:36.121 Test: blockdev write zeroes read split ...passed 00:14:36.121 Test: blockdev write zeroes read split partial ...passed 00:14:36.121 Test: blockdev reset ...passed 00:14:36.121 Test: blockdev write read 8 blocks ...passed 00:14:36.121 Test: blockdev write read size > 128k ...passed 00:14:36.121 Test: blockdev write read invalid size ...passed 00:14:36.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:36.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:36.121 Test: blockdev write read max offset ...passed 00:14:36.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:36.121 Test: blockdev writev readv 8 blocks ...passed 00:14:36.121 Test: blockdev writev readv 30 x 1block ...passed 00:14:36.121 Test: blockdev writev readv block ...passed 00:14:36.121 Test: blockdev writev readv size > 128k ...passed 00:14:36.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:36.121 Test: blockdev comparev and writev ...passed 00:14:36.121 Test: blockdev nvme passthru rw ...passed 00:14:36.121 Test: blockdev nvme passthru vendor specific ...passed 00:14:36.121 Test: blockdev nvme admin passthru ...passed 00:14:36.121 Test: blockdev copy ...passed 
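Each bdevio suite above runs the same battery against one bdev: plain and zero-fill write/read, split and partial-split reads, offset-boundary checks, scatter-gather (writev/readv) variants, compare-and-write, copy, and the NVMe passthru cases; the CUnit run summary that follows totals them across all six suites. The harness drives this in two steps, sketched here with the paths from this run (per the waitforlisten/perform_tests pair visible in the trace, the -w flag holds bdevio idle until the tests are triggered over RPC):

  # start bdevio waiting on the default RPC socket, same bdev config as above
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &

  # once the socket is up, fire the whole CUnit test battery
  ./test/bdev/bdevio/tests.py perform_tests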
00:14:36.121 00:14:36.121 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.121 suites 6 6 n/a 0 0 00:14:36.121 tests 138 138 138 0 0 00:14:36.121 asserts 780 780 780 0 n/a 00:14:36.121 00:14:36.121 Elapsed time = 0.609 seconds 00:14:36.121 0 00:14:36.121 03:01:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83292 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83292 ']' 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83292 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83292 00:14:36.122 killing process with pid 83292 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83292' 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83292 00:14:36.122 03:01:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83292 00:14:36.383 ************************************ 00:14:36.383 END TEST bdev_bounds 00:14:36.383 ************************************ 00:14:36.383 03:01:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:36.383 00:14:36.383 real 0m1.577s 00:14:36.383 user 0m3.907s 00:14:36.383 sys 0m0.347s 00:14:36.383 03:01:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.383 03:01:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:36.383 03:01:52 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:36.383 03:01:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.383 03:01:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.383 03:01:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.383 ************************************ 00:14:36.383 START TEST bdev_nbd 00:14:36.383 ************************************ 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
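The bdev_nbd test starting here exports each bdev through the kernel NBD driver and exercises it as a regular block node. Per device, the flow visible in the trace is: nbd_start_disk over a dedicated RPC socket, poll /proc/partitions until the node appears (waitfornbd retries the grep up to 20 times), read one 4 KiB block with an O_DIRECT dd as a sanity check, then nbd_get_disks and nbd_stop_disk to tear down. A condensed one-device sketch using only the RPCs shown in this log (socket path and bdev name copied from the run; /tmp/nbdtest stands in for the harness's scratch file, and the nbd kernel module must be loaded, as the [[ -e /sys/module/nbd ]] check below verifies):

  SOCK=/var/tmp/spdk-nbd.sock

  # export bdev nvme0n1 as a kernel NBD block device
  ./scripts/rpc.py -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0

  # wait for the node to register, then verify with one direct-I/O block read
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  # list active exports, then detach the device
  ./scripts/rpc.py -s "$SOCK" nbd_get_disks
  ./scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0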
00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83342 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83342 /var/tmp/spdk-nbd.sock 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83342 ']' 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:36.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.383 03:01:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:36.383 [2024-11-29 03:01:52.348330] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:36.383 [2024-11-29 03:01:52.348653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.644 [2024-11-29 03:01:52.498840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.644 [2024-11-29 03:01:52.528793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:37.585 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.586 
1+0 records in 00:14:37.586 1+0 records out 00:14:37.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130145 s, 3.1 MB/s 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.586 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.846 1+0 records in 00:14:37.846 1+0 records out 00:14:37.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012638 s, 3.2 MB/s 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.846 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:37.847 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.847 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.847 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:38.108 03:01:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.108 1+0 records in 00:14:38.108 1+0 records out 00:14:38.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920934 s, 4.4 MB/s 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:38.108 03:01:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:38.108 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.369 1+0 records in 00:14:38.369 1+0 records out 00:14:38.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131734 s, 3.1 MB/s 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:38.369 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.720 1+0 records in 00:14:38.720 1+0 records out 00:14:38.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112272 s, 3.6 MB/s 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:38.720 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:39.008 03:01:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.008 1+0 records in 00:14:39.008 1+0 records out 00:14:39.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134271 s, 3.1 MB/s 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:39.008 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:39.270 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd0", 00:14:39.270 "bdev_name": "nvme0n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd1", 00:14:39.270 "bdev_name": "nvme0n2" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd2", 00:14:39.270 "bdev_name": "nvme0n3" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd3", 00:14:39.270 "bdev_name": "nvme1n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd4", 00:14:39.270 "bdev_name": "nvme2n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd5", 00:14:39.270 "bdev_name": "nvme3n1" 00:14:39.270 } 00:14:39.270 ]' 00:14:39.270 03:01:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:39.270 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:39.270 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd0", 00:14:39.270 "bdev_name": "nvme0n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd1", 00:14:39.270 "bdev_name": "nvme0n2" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd2", 00:14:39.270 "bdev_name": "nvme0n3" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd3", 00:14:39.270 "bdev_name": "nvme1n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": "/dev/nbd4", 00:14:39.270 "bdev_name": "nvme2n1" 00:14:39.270 }, 00:14:39.270 { 00:14:39.270 "nbd_device": 
"/dev/nbd5", 00:14:39.270 "bdev_name": "nvme3n1" 00:14:39.271 } 00:14:39.271 ]' 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:39.271 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.532 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.793 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.055 03:01:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.317 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.578 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.840 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:41.102 /dev/nbd0 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.102 1+0 records in 00:14:41.102 1+0 records out 00:14:41.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101639 s, 4.0 MB/s 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.102 03:01:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:14:41.363 /dev/nbd1 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:41.363 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.364 1+0 records in 00:14:41.364 1+0 records out 00:14:41.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832933 s, 4.9 MB/s 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:41.364 03:01:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.364 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:14:41.625 /dev/nbd10 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.625 1+0 records in 00:14:41.625 1+0 records out 00:14:41.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154137 s, 2.7 MB/s 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.625 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:14:41.888 /dev/nbd11 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.888 03:01:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.888 1+0 records in 00:14:41.888 1+0 records out 00:14:41.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010997 s, 3.7 MB/s 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.888 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:42.151 /dev/nbd12 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.151 1+0 records in 00:14:42.151 1+0 records out 00:14:42.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012278 s, 3.3 MB/s 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:42.151 03:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:42.413 /dev/nbd13 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.413 1+0 records in 00:14:42.413 1+0 records out 00:14:42.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867843 s, 4.7 MB/s 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.413 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd0", 00:14:42.676 "bdev_name": "nvme0n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd1", 00:14:42.676 "bdev_name": "nvme0n2" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd10", 00:14:42.676 "bdev_name": "nvme0n3" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd11", 00:14:42.676 "bdev_name": "nvme1n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd12", 00:14:42.676 "bdev_name": "nvme2n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd13", 00:14:42.676 "bdev_name": "nvme3n1" 00:14:42.676 } 00:14:42.676 ]' 00:14:42.676 03:01:58 
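The attach loop above repeats one readiness gate per device: waitfornbd (autotest_common.sh@872-893 in the trace) polls /proc/partitions until the kernel registers the nbd name, then proves the device actually serves I/O with a single 4 KiB O_DIRECT read. A minimal sketch reconstructed from the logged commands — the sleep between retries and the failure path are assumptions, the rest mirrors the trace:

# Sketch of waitfornbd as reconstructed from the xtrace above (simplified).
waitfornbd() {
    local nbd_name=$1
    local i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch file path as in the log
    # Phase 1: wait for the kernel to list the device (up to 20 tries).
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in the xtrace
    done
    # Phase 2: one direct 4 KiB read must round-trip before the device counts as up.
    for (( i = 1; i <= 20; i++ )); do
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "${size:-0}" != 0 ]   # returns 0 exactly when the probe read produced data
}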
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd0", 00:14:42.676 "bdev_name": "nvme0n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd1", 00:14:42.676 "bdev_name": "nvme0n2" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd10", 00:14:42.676 "bdev_name": "nvme0n3" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd11", 00:14:42.676 "bdev_name": "nvme1n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd12", 00:14:42.676 "bdev_name": "nvme2n1" 00:14:42.676 }, 00:14:42.676 { 00:14:42.676 "nbd_device": "/dev/nbd13", 00:14:42.676 "bdev_name": "nvme3n1" 00:14:42.676 } 00:14:42.676 ]' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:42.676 /dev/nbd1 00:14:42.676 /dev/nbd10 00:14:42.676 /dev/nbd11 00:14:42.676 /dev/nbd12 00:14:42.676 /dev/nbd13' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:42.676 /dev/nbd1 00:14:42.676 /dev/nbd10 00:14:42.676 /dev/nbd11 00:14:42.676 /dev/nbd12 00:14:42.676 /dev/nbd13' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:42.676 256+0 records in 00:14:42.676 256+0 records out 00:14:42.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489578 s, 214 MB/s 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.676 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:42.938 256+0 records in 00:14:42.938 256+0 records out 00:14:42.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244023 s, 4.3 MB/s 00:14:42.938 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.938 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:43.200 256+0 records in 00:14:43.200 256+0 records out 00:14:43.200 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.235839 s, 4.4 MB/s 00:14:43.200 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.200 03:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:43.200 256+0 records in 00:14:43.200 256+0 records out 00:14:43.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195463 s, 5.4 MB/s 00:14:43.200 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.200 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:43.462 256+0 records in 00:14:43.462 256+0 records out 00:14:43.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248494 s, 4.2 MB/s 00:14:43.462 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:43.462 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:44.036 256+0 records in 00:14:44.036 256+0 records out 00:14:44.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.314433 s, 3.3 MB/s 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:44.036 256+0 records in 00:14:44.036 256+0 records out 00:14:44.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230204 s, 4.6 MB/s 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.036 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.298 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.559 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
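The write/verify pass that just finished is nbd_common.sh's nbd_dd_data_verify: seed 1 MiB of random data, push it to every nbd device with O_DIRECT, then byte-compare the first 1 MiB of each device against the seed and delete it. Condensed from the trace; the argument handling is inferred from the two logged call sites, while the dd and cmp commands are the logged ones:

# Sketch of the write-then-verify round trip, per nbd_common.sh@70-85 in the trace.
nbd_dd_data_verify() {
    local nbd_list=($1)      # space-separated device list, split into an array
    local operation=$2       # 'write' or 'verify', matching the two calls in the log
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    local i
    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random seed data, fanned out to every device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Every device must hand the seed back byte-for-byte.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}

In the trace it runs once with write after all six devices attach and once with verify just before tear-down, so a device that dropped or corrupted data fails the cmp.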
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.819 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.077 03:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.336 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.595 03:02:01 
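The tear-down under way above mirrors attach: after each nbd_stop_disk RPC the harness runs waitfornbd_exit, which polls /proc/partitions until the name disappears. A sketch per nbd_common.sh@35-45 as traced; only the poll interval is an assumption:

# Sketch of waitfornbd_exit: wait for the kernel to drop the device.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for (( i = 1; i <= 20; i++ )); do
        # Done as soon as the device is no longer listed.
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed; the xtrace only shows the grep and the break
    done
    return 0
}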
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:45.595 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:45.853 malloc_lvol_verify 00:14:45.853 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:46.111 0aadcff0-e132-4e8b-94b8-1c713638d294 00:14:46.111 03:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:46.370 dff35ad1-0db9-42c4-886a-f35b8bebe19c 00:14:46.370 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:46.629 /dev/nbd0 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
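The nbd_with_lvol_verify step traced above stacks a logical volume on a malloc bdev, exports it over /dev/nbd0, and proves the whole path works by formatting it; the mke2fs output follows below. The RPC sequence, condensed from the trace with socket path and sizes exactly as logged:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
$rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on top of it
$rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
$rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
mkfs.ext4 /dev/nbd0   # only succeeds if reads and writes really reach the lvol

Before mkfs runs, wait_for_nbd_set_capacity additionally checks that /sys/block/nbd0/size is non-zero (8192 sectors here, i.e. the 4 MiB lvol made it through).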
00:14:46.629 mke2fs 1.47.0 (5-Feb-2023) 00:14:46.629 Discarding device blocks: 0/4096 done 00:14:46.629 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:46.629 00:14:46.629 Allocating group tables: 0/1 done 00:14:46.629 Writing inode tables: 0/1 done 00:14:46.629 Creating journal (1024 blocks): done 00:14:46.629 Writing superblocks and filesystem accounting information: 0/1 done 00:14:46.629 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.629 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83342 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83342 ']' 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83342 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83342 00:14:46.890 killing process with pid 83342 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83342' 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83342 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83342 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:46.890 00:14:46.890 real 0m10.530s 00:14:46.890 user 0m14.303s 00:14:46.890 sys 0m3.842s 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.890 03:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:46.890 ************************************ 
00:14:46.890 END TEST bdev_nbd 00:14:46.890 ************************************ 00:14:46.890 03:02:02 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:14:46.890 03:02:02 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:14:46.890 03:02:02 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:14:46.890 03:02:02 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:14:46.891 03:02:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:46.891 03:02:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.891 03:02:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.891 ************************************ 00:14:46.891 START TEST bdev_fio 00:14:46.891 ************************************ 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:46.891 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:14:46.891 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
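Every phase in this log is driven through run_test, which prints the asterisk banners and the real/user/sys totals that bracket bdev_nbd above and bdev_fio below. A simplified sketch of that wrapper, inferred from the banners and timing lines; the real helper in autotest_common.sh also saves and restores xtrace state and propagates the body's exit code:

# Simplified sketch of run_test as its output appears throughout this log.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"   # the timed body is what produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}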
echo serialize_overlap=1 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.152 03:02:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:47.152 ************************************ 00:14:47.153 START TEST bdev_fio_rw_verify 00:14:47.153 ************************************ 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:47.153 03:02:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.153 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.153 fio-3.35 00:14:47.153 Starting 6 threads 00:14:59.384 00:14:59.384 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83743: Fri Nov 29 03:02:13 2024 00:14:59.384 read: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(679MiB/10002msec) 00:14:59.384 slat (usec): min=2, max=2317, avg= 6.84, stdev=16.22 00:14:59.384 clat (usec): min=83, max=6814, avg=1126.00, 
stdev=746.54 00:14:59.384 lat (usec): min=87, max=6823, avg=1132.83, stdev=747.39 00:14:59.384 clat percentiles (usec): 00:14:59.384 | 50.000th=[ 1020], 99.000th=[ 3425], 99.900th=[ 4883], 99.990th=[ 6128], 00:14:59.384 | 99.999th=[ 6783] 00:14:59.384 write: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(688MiB/10002msec); 0 zone resets 00:14:59.384 slat (usec): min=13, max=5329, avg=37.50, stdev=121.19 00:14:59.384 clat (usec): min=71, max=8826, avg=1302.67, stdev=798.90 00:14:59.384 lat (usec): min=91, max=8879, avg=1340.17, stdev=811.33 00:14:59.384 clat percentiles (usec): 00:14:59.384 | 50.000th=[ 1188], 99.000th=[ 3752], 99.900th=[ 5145], 99.990th=[ 6325], 00:14:59.384 | 99.999th=[ 8848] 00:14:59.384 bw ( KiB/s): min=49093, max=104536, per=100.00%, avg=71320.37, stdev=2934.74, samples=114 00:14:59.384 iops : min=12272, max=26134, avg=17829.21, stdev=733.69, samples=114 00:14:59.384 lat (usec) : 100=0.02%, 250=5.86%, 500=13.57%, 750=12.93%, 1000=12.48% 00:14:59.384 lat (msec) : 2=40.59%, 4=14.03%, 10=0.51% 00:14:59.384 cpu : usr=42.90%, sys=32.84%, ctx=6190, majf=0, minf=18460 00:14:59.384 IO depths : 1=11.5%, 2=23.9%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.384 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.384 issued rwts: total=173783,176127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:59.384 00:14:59.384 Run status group 0 (all jobs): 00:14:59.384 READ: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=679MiB (712MB), run=10002-10002msec 00:14:59.384 WRITE: bw=68.8MiB/s (72.1MB/s), 68.8MiB/s-68.8MiB/s (72.1MB/s-72.1MB/s), io=688MiB (721MB), run=10002-10002msec 00:14:59.384 ----------------------------------------------------- 00:14:59.384 Suppressions used: 00:14:59.384 count bytes template 00:14:59.384 6 48 /usr/src/fio/parse.c 00:14:59.384 2234 214464 /usr/src/fio/iolog.c 00:14:59.384 1 8 libtcmalloc_minimal.so 00:14:59.384 1 904 libcrypto.so 00:14:59.384 ----------------------------------------------------- 00:14:59.384 00:14:59.384 00:14:59.384 real 0m11.170s 00:14:59.384 user 0m26.502s 00:14:59.384 sys 0m20.007s 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.384 ************************************ 00:14:59.384 END TEST bdev_fio_rw_verify 00:14:59.384 ************************************ 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:14:59.384 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0c3d84dc-cdf0-4403-aa22-a23be8e6b5a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c3d84dc-cdf0-4403-aa22-a23be8e6b5a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "94fab074-3574-438c-ac20-68ec94d29b82"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94fab074-3574-438c-ac20-68ec94d29b82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4dab130-9f36-48e5-b68a-21b393d0f2db"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4dab130-9f36-48e5-b68a-21b393d0f2db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1d67feef-f59a-4f2f-b0a4-881f1c4314be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1d67feef-f59a-4f2f-b0a4-881f1c4314be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ff11ae4c-b27c-42e0-9edc-6e0985e1b607"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ff11ae4c-b27c-42e0-9edc-6e0985e1b607",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "01147c7f-193b-4a85-b6fe-f70297994625"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "01147c7f-193b-4a85-b6fe-f70297994625",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:59.385 /home/vagrant/spdk_repo/spdk 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:59.385 ************************************ 00:14:59.385 END TEST bdev_fio 00:14:59.385 
************************************ 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:59.385 00:14:59.385 real 0m11.341s 00:14:59.385 user 0m26.580s 00:14:59.385 sys 0m20.079s 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.385 03:02:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 03:02:14 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:59.385 03:02:14 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:59.385 03:02:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:59.385 03:02:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.385 03:02:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.385 ************************************ 00:14:59.385 START TEST bdev_verify 00:14:59.385 ************************************ 00:14:59.385 03:02:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:59.385 [2024-11-29 03:02:14.353291] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:59.385 [2024-11-29 03:02:14.353856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83910 ] 00:14:59.385 [2024-11-29 03:02:14.495643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.385 [2024-11-29 03:02:14.539366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.385 [2024-11-29 03:02:14.539536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.385 Running I/O for 5 seconds... 
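bdev_verify swaps fio for SPDK's bundled bdevperf example, run against the same bdev.json generated for the fio stage. The invocation from the trace, annotated; the flag readings are taken from bdevperf's usage conventions, and -C combined with the two-core mask is the apparent reason every device reports two result rows below, one per core:

# -q queue depth per job, -o I/O size in bytes, -w workload type,
# -t runtime in seconds, -m core mask (0x3 = cores 0 and 1), and -C to let
# every core submit I/O to every bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3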
00:15:01.272 23328.00 IOPS, 91.12 MiB/s [2024-11-29T03:02:18.209Z] 24144.00 IOPS, 94.31 MiB/s [2024-11-29T03:02:19.155Z] 23616.00 IOPS, 92.25 MiB/s [2024-11-29T03:02:20.099Z] 23680.00 IOPS, 92.50 MiB/s [2024-11-29T03:02:20.099Z] 23481.60 IOPS, 91.73 MiB/s 00:15:04.107 Latency(us) 00:15:04.107 [2024-11-29T03:02:20.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.107 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0x80000 00:15:04.107 nvme0n1 : 5.05 1875.63 7.33 0.00 0.00 68129.02 13308.85 57671.68 00:15:04.107 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x80000 length 0x80000 00:15:04.107 nvme0n1 : 5.02 1837.68 7.18 0.00 0.00 69512.73 7864.32 75416.81 00:15:04.107 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0x80000 00:15:04.107 nvme0n2 : 5.05 1874.95 7.32 0.00 0.00 68038.71 9175.04 61704.66 00:15:04.107 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x80000 length 0x80000 00:15:04.107 nvme0n2 : 5.04 1829.80 7.15 0.00 0.00 69636.07 9578.34 63317.86 00:15:04.107 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0x80000 00:15:04.107 nvme0n3 : 5.08 1891.16 7.39 0.00 0.00 67343.15 10435.35 72190.42 00:15:04.107 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x80000 length 0x80000 00:15:04.107 nvme0n3 : 5.09 1836.62 7.17 0.00 0.00 69217.18 8822.15 75416.81 00:15:04.107 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0x20000 00:15:04.107 nvme1n1 : 5.08 1890.51 7.38 0.00 0.00 67255.06 11746.07 64124.46 00:15:04.107 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x20000 length 0x20000 00:15:04.107 nvme1n1 : 5.09 1834.57 7.17 0.00 0.00 69128.03 9376.69 65334.35 00:15:04.107 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0xbd0bd 00:15:04.107 nvme2n1 : 5.07 2573.52 10.05 0.00 0.00 49241.41 3780.92 58881.58 00:15:04.107 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:04.107 nvme2n1 : 5.10 2440.83 9.53 0.00 0.00 51839.61 5973.86 70577.23 00:15:04.107 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0x0 length 0xa0000 00:15:04.107 nvme3n1 : 5.08 1864.34 7.28 0.00 0.00 67967.64 4789.17 70577.23 00:15:04.107 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:04.107 Verification LBA range: start 0xa0000 length 0xa0000 00:15:04.107 nvme3n1 : 5.09 1507.36 5.89 0.00 0.00 83821.27 6755.25 102034.51 00:15:04.107 [2024-11-29T03:02:20.099Z] =================================================================================================================== 00:15:04.107 [2024-11-29T03:02:20.099Z] Total : 23256.99 90.85 0.00 0.00 65576.23 3780.92 102034.51 00:15:04.367 00:15:04.367 real 0m6.004s 00:15:04.367 user 0m9.451s 00:15:04.367 sys 0m1.657s 00:15:04.367 03:02:20 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.367 03:02:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:04.367 ************************************ 00:15:04.367 END TEST bdev_verify 00:15:04.367 ************************************ 00:15:04.367 03:02:20 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:04.367 03:02:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:04.367 03:02:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.367 03:02:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.627 ************************************ 00:15:04.627 START TEST bdev_verify_big_io 00:15:04.627 ************************************ 00:15:04.627 03:02:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:04.627 [2024-11-29 03:02:20.432154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:04.627 [2024-11-29 03:02:20.432517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84004 ] 00:15:04.627 [2024-11-29 03:02:20.580749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:04.887 [2024-11-29 03:02:20.621214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.887 [2024-11-29 03:02:20.621262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.147 Running I/O for 5 seconds... 
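bdev_verify_big_io, launched just above, is the same harness with one change: 64 KiB I/Os instead of 4 KiB, shifting the test from IOPS-bound toward bandwidth- and latency-bound, which is why the results below show far fewer IOPS at much higher per-I/O latencies. Only the -o argument differs between the two logged invocations:

# bdev_verify vs bdev_verify_big_io: only the I/O size changes.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
json=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json $json -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify (above)
$bdevperf --json $json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io (this run)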
00:15:10.994 2013.00 IOPS, 125.81 MiB/s [2024-11-29T03:02:26.986Z] 2561.50 IOPS, 160.09 MiB/s [2024-11-29T03:02:27.559Z] 2658.67 IOPS, 166.17 MiB/s 00:15:11.567 Latency(us) 00:15:11.567 [2024-11-29T03:02:27.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.567 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0x8000 00:15:11.567 nvme0n1 : 5.63 147.78 9.24 0.00 0.00 826077.89 117763.15 813049.70 00:15:11.567 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x8000 length 0x8000 00:15:11.567 nvme0n1 : 5.90 86.84 5.43 0.00 0.00 1417657.11 116149.96 1561571.64 00:15:11.567 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0x8000 00:15:11.567 nvme0n2 : 5.71 131.79 8.24 0.00 0.00 931242.96 70980.53 1987454.82 00:15:11.567 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x8000 length 0x8000 00:15:11.567 nvme0n2 : 5.90 97.65 6.10 0.00 0.00 1190190.41 5923.45 1729343.80 00:15:11.567 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0x8000 00:15:11.567 nvme0n3 : 5.70 120.73 7.55 0.00 0.00 983025.93 147607.24 1729343.80 00:15:11.567 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x8000 length 0x8000 00:15:11.567 nvme0n3 : 5.90 62.37 3.90 0.00 0.00 1766374.93 84692.68 3110237.74 00:15:11.567 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0x2000 00:15:11.567 nvme1n1 : 5.71 172.33 10.77 0.00 0.00 677889.24 5494.94 796917.76 00:15:11.567 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x2000 length 0x2000 00:15:11.567 nvme1n1 : 5.99 107.34 6.71 0.00 0.00 991553.02 29844.09 1013085.74 00:15:11.567 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0xbd0b 00:15:11.567 nvme2n1 : 5.72 234.95 14.68 0.00 0.00 485927.52 11947.72 642051.15 00:15:11.567 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:11.567 nvme2n1 : 6.20 201.31 12.58 0.00 0.00 510535.04 6276.33 1103424.59 00:15:11.567 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0x0 length 0xa000 00:15:11.567 nvme3n1 : 5.72 156.63 9.79 0.00 0.00 709177.12 5520.15 838860.80 00:15:11.567 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.567 Verification LBA range: start 0xa000 length 0xa000 00:15:11.567 nvme3n1 : 6.36 215.63 13.48 0.00 0.00 456323.67 567.14 2826315.62 00:15:11.567 [2024-11-29T03:02:27.559Z] =================================================================================================================== 00:15:11.567 [2024-11-29T03:02:27.559Z] Total : 1735.35 108.46 0.00 0.00 781396.93 567.14 3110237.74 00:15:11.829 00:15:11.829 real 0m7.285s 00:15:11.829 user 0m13.294s 00:15:11.829 sys 0m0.522s 00:15:11.829 ************************************ 00:15:11.829 END TEST bdev_verify_big_io 00:15:11.829 ************************************ 00:15:11.829 
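(The Total rows of the two verify tables can be cross-checked by hand, since throughput is just IOPS times I/O size; a quick bc sketch:)
# 4 KiB verify pass: 23256.99 IOPS * 4096 B, in MiB/s
echo '23256.99 * 4096 / 1048576' | bc -l   # ~90.85, matching that Total row
# 64 KiB big-I/O pass: 1735.35 IOPS * 65536 B, in MiB/s
echo '1735.35 * 65536 / 1048576' | bc -l   # ~108.46, matching the Total row above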
03:02:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.829 03:02:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.829 03:02:27 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.829 03:02:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:11.829 03:02:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.829 03:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.829 ************************************ 00:15:11.829 START TEST bdev_write_zeroes 00:15:11.829 ************************************ 00:15:11.829 03:02:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:11.829 [2024-11-29 03:02:27.786494] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:11.829 [2024-11-29 03:02:27.786853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84109 ] 00:15:12.090 [2024-11-29 03:02:27.932287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.090 [2024-11-29 03:02:27.968992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.352 Running I/O for 1 seconds... 00:15:13.738 74080.00 IOPS, 289.38 MiB/s 00:15:13.738 Latency(us) 00:15:13.738 [2024-11-29T03:02:29.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.738 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme0n1 : 1.02 12241.50 47.82 0.00 0.00 10444.72 8116.38 23290.49 00:15:13.738 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme0n2 : 1.01 12294.30 48.02 0.00 0.00 10389.72 8166.79 21677.29 00:15:13.738 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme0n3 : 1.01 12260.91 47.89 0.00 0.00 10408.01 8166.79 20265.75 00:15:13.738 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme1n1 : 1.02 12227.78 47.76 0.00 0.00 10426.73 8166.79 20568.22 00:15:13.738 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme2n1 : 1.03 12445.32 48.61 0.00 0.00 10222.62 4889.99 21979.77 00:15:13.738 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:13.738 nvme3n1 : 1.03 12193.84 47.63 0.00 0.00 10355.13 6175.51 23996.26 00:15:13.738 [2024-11-29T03:02:29.730Z] =================================================================================================================== 00:15:13.738 [2024-11-29T03:02:29.730Z] Total : 73663.65 287.75 0.00 0.00 10373.86 4889.99 23996.26 00:15:13.738 00:15:13.738 real 0m1.865s 00:15:13.738 user 0m1.151s 00:15:13.738 sys 0m0.517s 00:15:13.738 03:02:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.738 ************************************ 00:15:13.738 END TEST bdev_write_zeroes 00:15:13.738 03:02:29 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:13.738 ************************************ 00:15:13.738 03:02:29 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:13.738 03:02:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:13.738 03:02:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.738 03:02:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.738 ************************************ 00:15:13.738 START TEST bdev_json_nonenclosed 00:15:13.738 ************************************ 00:15:13.738 03:02:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:13.999 [2024-11-29 03:02:29.733055] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:13.999 [2024-11-29 03:02:29.733225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84147 ] 00:15:13.999 [2024-11-29 03:02:29.879411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.999 [2024-11-29 03:02:29.919433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.999 [2024-11-29 03:02:29.919555] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:13.999 [2024-11-29 03:02:29.919577] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:13.999 [2024-11-29 03:02:29.919595] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:14.259 ************************************ 00:15:14.259 END TEST bdev_json_nonenclosed 00:15:14.259 ************************************ 00:15:14.259 00:15:14.259 real 0m0.347s 00:15:14.259 user 0m0.139s 00:15:14.259 sys 0m0.104s 00:15:14.259 03:02:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.259 03:02:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:14.259 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.259 03:02:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:14.259 03:02:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.259 03:02:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.259 ************************************ 00:15:14.259 START TEST bdev_json_nonarray 00:15:14.259 ************************************ 00:15:14.259 03:02:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:14.259 [2024-11-29 03:02:30.147398] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
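(Both bdev_json_* tests point bdevperf at a deliberately malformed config and expect startup to abort. The fixture files are not dumped in this log; judging only from the two error strings, "not enclosed in {}" and "'subsystems' should be an array", they look roughly like the sketch below, an inference rather than the files' verbatim contents.)
# nonenclosed.json: a bare key with no enclosing top-level object, e.g.
#     "subsystems": []
# nonarray.json: "subsystems" present but given as an object, not an array, e.g.
#     { "subsystems": {} }
# Either way json_config_prepare_ctx rejects the config and the app exits via
# spdk_app_stop with a non-zero code, the outcome both tests assert.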
00:15:14.259 [2024-11-29 03:02:30.147540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84172 ] 00:15:14.520 [2024-11-29 03:02:30.296103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.520 [2024-11-29 03:02:30.336254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.520 [2024-11-29 03:02:30.336397] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:14.520 [2024-11-29 03:02:30.336417] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:14.520 [2024-11-29 03:02:30.336434] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:14.520 00:15:14.520 real 0m0.347s 00:15:14.520 user 0m0.133s 00:15:14.520 sys 0m0.109s 00:15:14.520 03:02:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.520 ************************************ 00:15:14.520 END TEST bdev_json_nonarray 00:15:14.520 ************************************ 00:15:14.520 03:02:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:14.520 03:02:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:15.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:23.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.241 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.241 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.241 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:23.241 ************************************ 00:15:23.241 END TEST blockdev_xnvme 00:15:23.241 ************************************ 00:15:23.241 00:15:23.241 real 0m51.130s 00:15:23.241 user 1m13.168s 00:15:23.241 sys 0m44.495s 00:15:23.241 03:02:38 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.241 03:02:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.241 03:02:38 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:23.241 03:02:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:23.241 03:02:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.241 03:02:38 -- 
common/autotest_common.sh@10 -- # set +x 00:15:23.241 ************************************ 00:15:23.241 START TEST ublk 00:15:23.241 ************************************ 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:23.241 * Looking for test storage... 00:15:23.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.241 03:02:38 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.241 03:02:38 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.241 03:02:38 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.241 03:02:38 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.241 03:02:38 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.241 03:02:38 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:23.241 03:02:38 ublk -- scripts/common.sh@345 -- # : 1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.241 03:02:38 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.241 03:02:38 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@353 -- # local d=1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.241 03:02:38 ublk -- scripts/common.sh@355 -- # echo 1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.241 03:02:38 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@353 -- # local d=2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.241 03:02:38 ublk -- scripts/common.sh@355 -- # echo 2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.241 03:02:38 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.241 03:02:38 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.241 03:02:38 ublk -- scripts/common.sh@368 -- # return 0 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.241 --rc genhtml_branch_coverage=1 00:15:23.241 --rc genhtml_function_coverage=1 00:15:23.241 --rc genhtml_legend=1 00:15:23.241 --rc geninfo_all_blocks=1 00:15:23.241 --rc geninfo_unexecuted_blocks=1 00:15:23.241 00:15:23.241 ' 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.241 --rc genhtml_branch_coverage=1 00:15:23.241 --rc genhtml_function_coverage=1 00:15:23.241 --rc genhtml_legend=1 00:15:23.241 --rc geninfo_all_blocks=1 00:15:23.241 --rc geninfo_unexecuted_blocks=1 00:15:23.241 00:15:23.241 ' 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.241 --rc genhtml_branch_coverage=1 00:15:23.241 --rc genhtml_function_coverage=1 00:15:23.241 --rc genhtml_legend=1 00:15:23.241 --rc geninfo_all_blocks=1 00:15:23.241 --rc geninfo_unexecuted_blocks=1 00:15:23.241 00:15:23.241 ' 00:15:23.241 03:02:38 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.241 --rc genhtml_branch_coverage=1 00:15:23.241 --rc genhtml_function_coverage=1 00:15:23.241 --rc genhtml_legend=1 00:15:23.241 --rc geninfo_all_blocks=1 00:15:23.241 --rc geninfo_unexecuted_blocks=1 00:15:23.241 00:15:23.241 ' 00:15:23.241 03:02:38 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:23.241 03:02:38 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:23.241 03:02:38 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:23.241 03:02:38 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:23.242 03:02:38 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:23.242 03:02:38 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:23.242 03:02:38 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:23.242 03:02:38 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:23.242 03:02:38 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:23.242 03:02:38 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:23.242 03:02:38 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:23.242 03:02:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:23.242 03:02:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.242 03:02:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:23.242 ************************************ 00:15:23.242 START TEST test_save_ublk_config 00:15:23.242 ************************************ 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84464 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84464 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84464 ']' 00:15:23.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.242 03:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.242 [2024-11-29 03:02:38.622990] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:23.242 [2024-11-29 03:02:38.623359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84464 ] 00:15:23.242 [2024-11-29 03:02:38.763705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.242 [2024-11-29 03:02:38.805790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.503 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.503 [2024-11-29 03:02:39.487856] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:23.503 [2024-11-29 03:02:39.489030] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:23.764 malloc0 00:15:23.764 [2024-11-29 03:02:39.527977] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:23.764 [2024-11-29 03:02:39.528074] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:23.764 [2024-11-29 03:02:39.528084] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:23.764 [2024-11-29 03:02:39.528101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:23.764 [2024-11-29 03:02:39.536982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:23.764 [2024-11-29 03:02:39.537033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:23.764 [2024-11-29 03:02:39.543865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:23.764 [2024-11-29 03:02:39.544014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:23.764 [2024-11-29 03:02:39.560859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:23.764 0 00:15:23.764 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.764 03:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:23.764 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.764 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.026 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.026 03:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:24.026 "subsystems": [ 00:15:24.026 { 00:15:24.026 "subsystem": "fsdev", 00:15:24.026 "config": [ 00:15:24.026 { 00:15:24.026 "method": "fsdev_set_opts", 00:15:24.026 "params": { 00:15:24.026 "fsdev_io_pool_size": 65535, 00:15:24.026 "fsdev_io_cache_size": 256 00:15:24.026 } 00:15:24.026 } 00:15:24.026 ] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "keyring", 00:15:24.026 "config": [] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "iobuf", 00:15:24.026 "config": [ 00:15:24.026 { 
00:15:24.026 "method": "iobuf_set_options", 00:15:24.026 "params": { 00:15:24.026 "small_pool_count": 8192, 00:15:24.026 "large_pool_count": 1024, 00:15:24.026 "small_bufsize": 8192, 00:15:24.026 "large_bufsize": 135168, 00:15:24.026 "enable_numa": false 00:15:24.026 } 00:15:24.026 } 00:15:24.026 ] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "sock", 00:15:24.026 "config": [ 00:15:24.026 { 00:15:24.026 "method": "sock_set_default_impl", 00:15:24.026 "params": { 00:15:24.026 "impl_name": "posix" 00:15:24.026 } 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "method": "sock_impl_set_options", 00:15:24.026 "params": { 00:15:24.026 "impl_name": "ssl", 00:15:24.026 "recv_buf_size": 4096, 00:15:24.026 "send_buf_size": 4096, 00:15:24.026 "enable_recv_pipe": true, 00:15:24.026 "enable_quickack": false, 00:15:24.026 "enable_placement_id": 0, 00:15:24.026 "enable_zerocopy_send_server": true, 00:15:24.026 "enable_zerocopy_send_client": false, 00:15:24.026 "zerocopy_threshold": 0, 00:15:24.026 "tls_version": 0, 00:15:24.026 "enable_ktls": false 00:15:24.026 } 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "method": "sock_impl_set_options", 00:15:24.026 "params": { 00:15:24.026 "impl_name": "posix", 00:15:24.026 "recv_buf_size": 2097152, 00:15:24.026 "send_buf_size": 2097152, 00:15:24.026 "enable_recv_pipe": true, 00:15:24.026 "enable_quickack": false, 00:15:24.026 "enable_placement_id": 0, 00:15:24.026 "enable_zerocopy_send_server": true, 00:15:24.026 "enable_zerocopy_send_client": false, 00:15:24.026 "zerocopy_threshold": 0, 00:15:24.026 "tls_version": 0, 00:15:24.026 "enable_ktls": false 00:15:24.026 } 00:15:24.026 } 00:15:24.026 ] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "vmd", 00:15:24.026 "config": [] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "accel", 00:15:24.026 "config": [ 00:15:24.026 { 00:15:24.026 "method": "accel_set_options", 00:15:24.026 "params": { 00:15:24.026 "small_cache_size": 128, 00:15:24.026 "large_cache_size": 16, 00:15:24.026 "task_count": 2048, 00:15:24.026 "sequence_count": 2048, 00:15:24.026 "buf_count": 2048 00:15:24.026 } 00:15:24.026 } 00:15:24.026 ] 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "subsystem": "bdev", 00:15:24.026 "config": [ 00:15:24.026 { 00:15:24.026 "method": "bdev_set_options", 00:15:24.026 "params": { 00:15:24.026 "bdev_io_pool_size": 65535, 00:15:24.026 "bdev_io_cache_size": 256, 00:15:24.026 "bdev_auto_examine": true, 00:15:24.026 "iobuf_small_cache_size": 128, 00:15:24.026 "iobuf_large_cache_size": 16 00:15:24.026 } 00:15:24.026 }, 00:15:24.026 { 00:15:24.026 "method": "bdev_raid_set_options", 00:15:24.026 "params": { 00:15:24.026 "process_window_size_kb": 1024, 00:15:24.026 "process_max_bandwidth_mb_sec": 0 00:15:24.026 } 00:15:24.026 }, 00:15:24.026 { 00:15:24.027 "method": "bdev_iscsi_set_options", 00:15:24.027 "params": { 00:15:24.027 "timeout_sec": 30 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "bdev_nvme_set_options", 00:15:24.027 "params": { 00:15:24.027 "action_on_timeout": "none", 00:15:24.027 "timeout_us": 0, 00:15:24.027 "timeout_admin_us": 0, 00:15:24.027 "keep_alive_timeout_ms": 10000, 00:15:24.027 "arbitration_burst": 0, 00:15:24.027 "low_priority_weight": 0, 00:15:24.027 "medium_priority_weight": 0, 00:15:24.027 "high_priority_weight": 0, 00:15:24.027 "nvme_adminq_poll_period_us": 10000, 00:15:24.027 "nvme_ioq_poll_period_us": 0, 00:15:24.027 "io_queue_requests": 0, 00:15:24.027 "delay_cmd_submit": true, 00:15:24.027 "transport_retry_count": 4, 00:15:24.027 
"bdev_retry_count": 3, 00:15:24.027 "transport_ack_timeout": 0, 00:15:24.027 "ctrlr_loss_timeout_sec": 0, 00:15:24.027 "reconnect_delay_sec": 0, 00:15:24.027 "fast_io_fail_timeout_sec": 0, 00:15:24.027 "disable_auto_failback": false, 00:15:24.027 "generate_uuids": false, 00:15:24.027 "transport_tos": 0, 00:15:24.027 "nvme_error_stat": false, 00:15:24.027 "rdma_srq_size": 0, 00:15:24.027 "io_path_stat": false, 00:15:24.027 "allow_accel_sequence": false, 00:15:24.027 "rdma_max_cq_size": 0, 00:15:24.027 "rdma_cm_event_timeout_ms": 0, 00:15:24.027 "dhchap_digests": [ 00:15:24.027 "sha256", 00:15:24.027 "sha384", 00:15:24.027 "sha512" 00:15:24.027 ], 00:15:24.027 "dhchap_dhgroups": [ 00:15:24.027 "null", 00:15:24.027 "ffdhe2048", 00:15:24.027 "ffdhe3072", 00:15:24.027 "ffdhe4096", 00:15:24.027 "ffdhe6144", 00:15:24.027 "ffdhe8192" 00:15:24.027 ] 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "bdev_nvme_set_hotplug", 00:15:24.027 "params": { 00:15:24.027 "period_us": 100000, 00:15:24.027 "enable": false 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "bdev_malloc_create", 00:15:24.027 "params": { 00:15:24.027 "name": "malloc0", 00:15:24.027 "num_blocks": 8192, 00:15:24.027 "block_size": 4096, 00:15:24.027 "physical_block_size": 4096, 00:15:24.027 "uuid": "bf11f813-768b-4e52-bbc5-0ed4225e50e9", 00:15:24.027 "optimal_io_boundary": 0, 00:15:24.027 "md_size": 0, 00:15:24.027 "dif_type": 0, 00:15:24.027 "dif_is_head_of_md": false, 00:15:24.027 "dif_pi_format": 0 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "bdev_wait_for_examine" 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "scsi", 00:15:24.027 "config": null 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "scheduler", 00:15:24.027 "config": [ 00:15:24.027 { 00:15:24.027 "method": "framework_set_scheduler", 00:15:24.027 "params": { 00:15:24.027 "name": "static" 00:15:24.027 } 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "vhost_scsi", 00:15:24.027 "config": [] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "vhost_blk", 00:15:24.027 "config": [] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "ublk", 00:15:24.027 "config": [ 00:15:24.027 { 00:15:24.027 "method": "ublk_create_target", 00:15:24.027 "params": { 00:15:24.027 "cpumask": "1" 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "ublk_start_disk", 00:15:24.027 "params": { 00:15:24.027 "bdev_name": "malloc0", 00:15:24.027 "ublk_id": 0, 00:15:24.027 "num_queues": 1, 00:15:24.027 "queue_depth": 128 00:15:24.027 } 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "nbd", 00:15:24.027 "config": [] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "nvmf", 00:15:24.027 "config": [ 00:15:24.027 { 00:15:24.027 "method": "nvmf_set_config", 00:15:24.027 "params": { 00:15:24.027 "discovery_filter": "match_any", 00:15:24.027 "admin_cmd_passthru": { 00:15:24.027 "identify_ctrlr": false 00:15:24.027 }, 00:15:24.027 "dhchap_digests": [ 00:15:24.027 "sha256", 00:15:24.027 "sha384", 00:15:24.027 "sha512" 00:15:24.027 ], 00:15:24.027 "dhchap_dhgroups": [ 00:15:24.027 "null", 00:15:24.027 "ffdhe2048", 00:15:24.027 "ffdhe3072", 00:15:24.027 "ffdhe4096", 00:15:24.027 "ffdhe6144", 00:15:24.027 "ffdhe8192" 00:15:24.027 ] 00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "nvmf_set_max_subsystems", 00:15:24.027 "params": { 00:15:24.027 "max_subsystems": 1024 
00:15:24.027 } 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "method": "nvmf_set_crdt", 00:15:24.027 "params": { 00:15:24.027 "crdt1": 0, 00:15:24.027 "crdt2": 0, 00:15:24.027 "crdt3": 0 00:15:24.027 } 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "subsystem": "iscsi", 00:15:24.027 "config": [ 00:15:24.027 { 00:15:24.027 "method": "iscsi_set_options", 00:15:24.027 "params": { 00:15:24.027 "node_base": "iqn.2016-06.io.spdk", 00:15:24.027 "max_sessions": 128, 00:15:24.027 "max_connections_per_session": 2, 00:15:24.027 "max_queue_depth": 64, 00:15:24.027 "default_time2wait": 2, 00:15:24.027 "default_time2retain": 20, 00:15:24.027 "first_burst_length": 8192, 00:15:24.027 "immediate_data": true, 00:15:24.027 "allow_duplicated_isid": false, 00:15:24.027 "error_recovery_level": 0, 00:15:24.027 "nop_timeout": 60, 00:15:24.027 "nop_in_interval": 30, 00:15:24.027 "disable_chap": false, 00:15:24.027 "require_chap": false, 00:15:24.027 "mutual_chap": false, 00:15:24.027 "chap_group": 0, 00:15:24.027 "max_large_datain_per_connection": 64, 00:15:24.027 "max_r2t_per_connection": 4, 00:15:24.027 "pdu_pool_size": 36864, 00:15:24.027 "immediate_data_pool_size": 16384, 00:15:24.027 "data_out_pool_size": 2048 00:15:24.027 } 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 } 00:15:24.027 ] 00:15:24.027 }' 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84464 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84464 ']' 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84464 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84464 00:15:24.027 killing process with pid 84464 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84464' 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84464 00:15:24.027 03:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84464 00:15:24.601 [2024-11-29 03:02:40.296802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:24.601 [2024-11-29 03:02:40.334878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:24.601 [2024-11-29 03:02:40.335054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:24.601 [2024-11-29 03:02:40.343858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:24.601 [2024-11-29 03:02:40.343951] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:24.601 [2024-11-29 03:02:40.343962] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:24.601 [2024-11-29 03:02:40.344000] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:24.601 [2024-11-29 03:02:40.344170] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:25.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
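(This is the heart of test_save_ublk_config: the JSON blob printed above was captured from the live target with save_config and is now fed back, unchanged, to a fresh spdk_tgt. The harness pipes it through /dev/fd/63; the equivalent file-based round-trip, sketched with the rpc.py helper this repo uses and with $tgtpid standing for the first target's pid:)
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py save_config > /tmp/ublk.json      # snapshot the running config
kill "$tgtpid"                                         # stop the first target
$SPDK/build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &   # restart from the snapshot
$SPDK/scripts/rpc.py ublk_get_disks                    # /dev/ublkb0 should reappear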
00:15:25.174 03:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84508 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 84508 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84508 ']' 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:25.174 03:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:25.174 "subsystems": [ 00:15:25.174 { 00:15:25.174 "subsystem": "fsdev", 00:15:25.174 "config": [ 00:15:25.174 { 00:15:25.174 "method": "fsdev_set_opts", 00:15:25.174 "params": { 00:15:25.174 "fsdev_io_pool_size": 65535, 00:15:25.174 "fsdev_io_cache_size": 256 00:15:25.174 } 00:15:25.174 } 00:15:25.174 ] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "keyring", 00:15:25.174 "config": [] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "iobuf", 00:15:25.174 "config": [ 00:15:25.174 { 00:15:25.174 "method": "iobuf_set_options", 00:15:25.174 "params": { 00:15:25.174 "small_pool_count": 8192, 00:15:25.174 "large_pool_count": 1024, 00:15:25.174 "small_bufsize": 8192, 00:15:25.174 "large_bufsize": 135168, 00:15:25.174 "enable_numa": false 00:15:25.174 } 00:15:25.174 } 00:15:25.174 ] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "sock", 00:15:25.174 "config": [ 00:15:25.174 { 00:15:25.174 "method": "sock_set_default_impl", 00:15:25.174 "params": { 00:15:25.174 "impl_name": "posix" 00:15:25.174 } 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "method": "sock_impl_set_options", 00:15:25.174 "params": { 00:15:25.174 "impl_name": "ssl", 00:15:25.174 "recv_buf_size": 4096, 00:15:25.174 "send_buf_size": 4096, 00:15:25.174 "enable_recv_pipe": true, 00:15:25.174 "enable_quickack": false, 00:15:25.174 "enable_placement_id": 0, 00:15:25.174 "enable_zerocopy_send_server": true, 00:15:25.174 "enable_zerocopy_send_client": false, 00:15:25.174 "zerocopy_threshold": 0, 00:15:25.174 "tls_version": 0, 00:15:25.174 "enable_ktls": false 00:15:25.174 } 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "method": "sock_impl_set_options", 00:15:25.174 "params": { 00:15:25.174 "impl_name": "posix", 00:15:25.174 "recv_buf_size": 2097152, 00:15:25.174 "send_buf_size": 2097152, 00:15:25.174 "enable_recv_pipe": true, 00:15:25.174 "enable_quickack": false, 00:15:25.174 "enable_placement_id": 0, 00:15:25.174 "enable_zerocopy_send_server": true, 00:15:25.174 "enable_zerocopy_send_client": false, 00:15:25.174 "zerocopy_threshold": 0, 00:15:25.174 "tls_version": 0, 00:15:25.174 "enable_ktls": false 00:15:25.174 } 00:15:25.174 } 00:15:25.174 ] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "vmd", 00:15:25.174 "config": [] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "accel", 00:15:25.174 "config": [ 00:15:25.174 { 00:15:25.174 "method": "accel_set_options", 00:15:25.174 "params": { 
00:15:25.174 "small_cache_size": 128, 00:15:25.174 "large_cache_size": 16, 00:15:25.174 "task_count": 2048, 00:15:25.174 "sequence_count": 2048, 00:15:25.174 "buf_count": 2048 00:15:25.174 } 00:15:25.174 } 00:15:25.174 ] 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "subsystem": "bdev", 00:15:25.174 "config": [ 00:15:25.174 { 00:15:25.174 "method": "bdev_set_options", 00:15:25.174 "params": { 00:15:25.174 "bdev_io_pool_size": 65535, 00:15:25.174 "bdev_io_cache_size": 256, 00:15:25.174 "bdev_auto_examine": true, 00:15:25.174 "iobuf_small_cache_size": 128, 00:15:25.174 "iobuf_large_cache_size": 16 00:15:25.174 } 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "method": "bdev_raid_set_options", 00:15:25.174 "params": { 00:15:25.174 "process_window_size_kb": 1024, 00:15:25.174 "process_max_bandwidth_mb_sec": 0 00:15:25.174 } 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "method": "bdev_iscsi_set_options", 00:15:25.174 "params": { 00:15:25.174 "timeout_sec": 30 00:15:25.174 } 00:15:25.174 }, 00:15:25.174 { 00:15:25.174 "method": "bdev_nvme_set_options", 00:15:25.174 "params": { 00:15:25.174 "action_on_timeout": "none", 00:15:25.174 "timeout_us": 0, 00:15:25.174 "timeout_admin_us": 0, 00:15:25.174 "keep_alive_timeout_ms": 10000, 00:15:25.174 "arbitration_burst": 0, 00:15:25.174 "low_priority_weight": 0, 00:15:25.174 "medium_priority_weight": 0, 00:15:25.174 "high_priority_weight": 0, 00:15:25.174 "nvme_adminq_poll_period_us": 10000, 00:15:25.174 "nvme_ioq_poll_period_us": 0, 00:15:25.174 "io_queue_requests": 0, 00:15:25.174 "delay_cmd_submit": true, 00:15:25.174 "transport_retry_count": 4, 00:15:25.175 "bdev_retry_count": 3, 00:15:25.175 "transport_ack_timeout": 0, 00:15:25.175 "ctrlr_loss_timeout_sec": 0, 00:15:25.175 "reconnect_delay_sec": 0, 00:15:25.175 "fast_io_fail_timeout_sec": 0, 00:15:25.175 "disable_auto_failback": false, 00:15:25.175 "generate_uuids": false, 00:15:25.175 "transport_tos": 0, 00:15:25.175 "nvme_error_stat": false, 00:15:25.175 "rdma_srq_size": 0, 00:15:25.175 "io_path_stat": false, 00:15:25.175 "allow_accel_sequence": false, 00:15:25.175 "rdma_max_cq_size": 0, 00:15:25.175 "rdma_cm_event_timeout_ms": 0, 00:15:25.175 "dhchap_digests": [ 00:15:25.175 "sha256", 00:15:25.175 "sha384", 00:15:25.175 "sha512" 00:15:25.175 ], 00:15:25.175 "dhchap_dhgroups": [ 00:15:25.175 "null", 00:15:25.175 "ffdhe2048", 00:15:25.175 "ffdhe3072", 00:15:25.175 "ffdhe4096", 00:15:25.175 "ffdhe6144", 00:15:25.175 "ffdhe8192" 00:15:25.175 ] 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "bdev_nvme_set_hotplug", 00:15:25.175 "params": { 00:15:25.175 "period_us": 100000, 00:15:25.175 "enable": false 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "bdev_malloc_create", 00:15:25.175 "params": { 00:15:25.175 "name": "malloc0", 00:15:25.175 "num_blocks": 8192, 00:15:25.175 "block_size": 4096, 00:15:25.175 "physical_block_size": 4096, 00:15:25.175 "uuid": "bf11f813-768b-4e52-bbc5-0ed4225e50e9", 00:15:25.175 "optimal_io_boundary": 0, 00:15:25.175 "md_size": 0, 00:15:25.175 "dif_type": 0, 00:15:25.175 "dif_is_head_of_md": false, 00:15:25.175 "dif_pi_format": 0 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "bdev_wait_for_examine" 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "scsi", 00:15:25.175 "config": null 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "scheduler", 00:15:25.175 "config": [ 00:15:25.175 { 00:15:25.175 "method": "framework_set_scheduler", 00:15:25.175 "params": { 00:15:25.175 
"name": "static" 00:15:25.175 } 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "vhost_scsi", 00:15:25.175 "config": [] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "vhost_blk", 00:15:25.175 "config": [] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "ublk", 00:15:25.175 "config": [ 00:15:25.175 { 00:15:25.175 "method": "ublk_create_target", 00:15:25.175 "params": { 00:15:25.175 "cpumask": "1" 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "ublk_start_disk", 00:15:25.175 "params": { 00:15:25.175 "bdev_name": "malloc0", 00:15:25.175 "ublk_id": 0, 00:15:25.175 "num_queues": 1, 00:15:25.175 "queue_depth": 128 00:15:25.175 } 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "nbd", 00:15:25.175 "config": [] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "nvmf", 00:15:25.175 "config": [ 00:15:25.175 { 00:15:25.175 "method": "nvmf_set_config", 00:15:25.175 "params": { 00:15:25.175 "discovery_filter": "match_any", 00:15:25.175 "admin_cmd_passthru": { 00:15:25.175 "identify_ctrlr": false 00:15:25.175 }, 00:15:25.175 "dhchap_digests": [ 00:15:25.175 "sha256", 00:15:25.175 "sha384", 00:15:25.175 "sha512" 00:15:25.175 ], 00:15:25.175 "dhchap_dhgroups": [ 00:15:25.175 "null", 00:15:25.175 "ffdhe2048", 00:15:25.175 "ffdhe3072", 00:15:25.175 "ffdhe4096", 00:15:25.175 "ffdhe6144", 00:15:25.175 "ffdhe8192" 00:15:25.175 ] 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "nvmf_set_max_subsystems", 00:15:25.175 "params": { 00:15:25.175 "max_subsystems": 1024 00:15:25.175 } 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "method": "nvmf_set_crdt", 00:15:25.175 "params": { 00:15:25.175 "crdt1": 0, 00:15:25.175 "crdt2": 0, 00:15:25.175 "crdt3": 0 00:15:25.175 } 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 }, 00:15:25.175 { 00:15:25.175 "subsystem": "iscsi", 00:15:25.175 "config": [ 00:15:25.175 { 00:15:25.175 "method": "iscsi_set_options", 00:15:25.175 "params": { 00:15:25.175 "node_base": "iqn.2016-06.io.spdk", 00:15:25.175 "max_sessions": 128, 00:15:25.175 "max_connections_per_session": 2, 00:15:25.175 "max_queue_depth": 64, 00:15:25.175 "default_time2wait": 2, 00:15:25.175 "default_time2retain": 20, 00:15:25.175 "first_burst_length": 8192, 00:15:25.175 "immediate_data": true, 00:15:25.175 "allow_duplicated_isid": false, 00:15:25.175 "error_recovery_level": 0, 00:15:25.175 "nop_timeout": 60, 00:15:25.175 "nop_in_interval": 30, 00:15:25.175 "disable_chap": false, 00:15:25.175 "require_chap": false, 00:15:25.175 "mutual_chap": false, 00:15:25.175 "chap_group": 0, 00:15:25.175 "max_large_datain_per_connection": 64, 00:15:25.175 "max_r2t_per_connection": 4, 00:15:25.175 "pdu_pool_size": 36864, 00:15:25.175 "immediate_data_pool_size": 16384, 00:15:25.175 "data_out_pool_size": 2048 00:15:25.175 } 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 } 00:15:25.175 ] 00:15:25.175 }' 00:15:25.175 [2024-11-29 03:02:41.040651] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:25.175 [2024-11-29 03:02:41.040805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84508 ] 00:15:25.436 [2024-11-29 03:02:41.188392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.436 [2024-11-29 03:02:41.227936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.007 [2024-11-29 03:02:41.726856] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:26.007 [2024-11-29 03:02:41.727291] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:26.007 [2024-11-29 03:02:41.735016] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:26.007 [2024-11-29 03:02:41.735116] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:26.007 [2024-11-29 03:02:41.735125] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:26.007 [2024-11-29 03:02:41.735139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:26.007 [2024-11-29 03:02:41.743989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:26.007 [2024-11-29 03:02:41.744036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:26.008 [2024-11-29 03:02:41.750874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:26.008 [2024-11-29 03:02:41.751007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:26.008 [2024-11-29 03:02:41.767862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84508 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84508 ']' 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84508 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84508 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.008 killing process with pid 84508 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84508' 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84508 00:15:26.008 03:02:41 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84508 00:15:26.581 [2024-11-29 03:02:42.360152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:26.581 [2024-11-29 03:02:42.398901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:26.582 [2024-11-29 03:02:42.399062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:26.582 [2024-11-29 03:02:42.407880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:26.582 [2024-11-29 03:02:42.407964] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:26.582 [2024-11-29 03:02:42.407983] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:26.582 [2024-11-29 03:02:42.408021] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:26.582 [2024-11-29 03:02:42.408189] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:27.154 03:02:43 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:27.154 ************************************ 00:15:27.154 END TEST test_save_ublk_config 00:15:27.154 ************************************ 00:15:27.154 00:15:27.154 real 0m4.492s 00:15:27.154 user 0m2.787s 00:15:27.154 sys 0m2.346s 00:15:27.154 03:02:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.154 03:02:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:27.154 03:02:43 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84564 00:15:27.154 03:02:43 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.154 03:02:43 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84564 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@835 -- # '[' -z 84564 ']' 00:15:27.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.154 03:02:43 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.154 03:02:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.416 [2024-11-29 03:02:43.172545] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:27.416 [2024-11-29 03:02:43.172687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84564 ] 00:15:27.416 [2024-11-29 03:02:43.320717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:27.416 [2024-11-29 03:02:43.362399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.416 [2024-11-29 03:02:43.362495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.362 03:02:44 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.362 03:02:44 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:28.362 03:02:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:28.362 03:02:44 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.362 03:02:44 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.362 03:02:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 ************************************ 00:15:28.362 START TEST test_create_ublk 00:15:28.362 ************************************ 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 [2024-11-29 03:02:44.046859] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:28.362 [2024-11-29 03:02:44.049044] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 [2024-11-29 03:02:44.165048] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:28.362 [2024-11-29 03:02:44.165616] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:28.362 [2024-11-29 03:02:44.165640] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:28.362 [2024-11-29 03:02:44.165652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:28.362 [2024-11-29 03:02:44.172950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:28.362 [2024-11-29 03:02:44.173011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:28.362 
[2024-11-29 03:02:44.180883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:28.362 [2024-11-29 03:02:44.181701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:28.362 [2024-11-29 03:02:44.195993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 03:02:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:28.362 { 00:15:28.362 "ublk_device": "/dev/ublkb0", 00:15:28.362 "id": 0, 00:15:28.362 "queue_depth": 512, 00:15:28.362 "num_queues": 4, 00:15:28.362 "bdev_name": "Malloc0" 00:15:28.362 } 00:15:28.362 ]' 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:28.362 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:28.624 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:28.625 03:02:44 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
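(run_fio_test, sourced from test/lvol/common.sh earlier in this run, assembles the fio command line from its positional arguments; the fully expanded invocation, copied from the xtrace that follows, is a direct-I/O, 10-second time-based write that stamps every block with 0xcc and verifies it back:)
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0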
00:15:28.625 03:02:44 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:28.625 fio: verification read phase will never start because write phase uses all of runtime 00:15:28.625 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:28.625 fio-3.35 00:15:28.625 Starting 1 process 00:15:38.674 00:15:38.674 fio_test: (groupid=0, jobs=1): err= 0: pid=84604: Fri Nov 29 03:02:54 2024 00:15:38.674 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(521MiB/10001msec); 0 zone resets 00:15:38.674 clat (usec): min=51, max=8422, avg=74.29, stdev=141.02 00:15:38.674 lat (usec): min=51, max=8440, avg=74.68, stdev=141.12 00:15:38.674 clat percentiles (usec): 00:15:38.674 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:15:38.674 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:15:38.674 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 76], 95.00th=[ 79], 00:15:38.674 | 99.00th=[ 137], 99.50th=[ 281], 99.90th=[ 2999], 99.95th=[ 3687], 00:15:38.674 | 99.99th=[ 4228] 00:15:38.674 bw ( KiB/s): min= 7640, max=58520, per=99.63%, avg=53149.16, stdev=11591.05, samples=19 00:15:38.674 iops : min= 1910, max=14630, avg=13287.26, stdev=2897.76, samples=19 00:15:38.674 lat (usec) : 100=98.47%, 250=0.82%, 500=0.46%, 750=0.01%, 1000=0.01% 00:15:38.674 lat (msec) : 2=0.06%, 4=0.14%, 10=0.02% 00:15:38.674 cpu : usr=1.88%, sys=12.18%, ctx=133375, majf=0, minf=796 00:15:38.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.674 issued rwts: total=0,133373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.674 00:15:38.674 Run status group 0 (all jobs): 00:15:38.674 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=521MiB (546MB), run=10001-10001msec 00:15:38.674 00:15:38.674 Disk stats (read/write): 00:15:38.674 ublkb0: ios=0/131870, merge=0/0, ticks=0/8317, in_queue=8317, util=99.09% 00:15:38.674 03:02:54 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.674 [2024-11-29 03:02:54.609576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:38.674 [2024-11-29 03:02:54.642444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:38.674 [2024-11-29 03:02:54.643344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:38.674 [2024-11-29 03:02:54.648884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:38.674 [2024-11-29 03:02:54.653090] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:38.674 [2024-11-29 03:02:54.653102] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.674 03:02:54 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:38.674 03:02:54 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.674 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.674 [2024-11-29 03:02:54.662943] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:38.933 request: 00:15:38.933 { 00:15:38.933 "ublk_id": 0, 00:15:38.933 "method": "ublk_stop_disk", 00:15:38.933 "req_id": 1 00:15:38.933 } 00:15:38.933 Got JSON-RPC error response 00:15:38.933 response: 00:15:38.933 { 00:15:38.933 "code": -19, 00:15:38.933 "message": "No such device" 00:15:38.933 } 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.933 03:02:54 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 [2024-11-29 03:02:54.671918] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.933 [2024-11-29 03:02:54.673160] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:38.933 [2024-11-29 03:02:54.673188] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.933 03:02:54 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.933 03:02:54 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:38.933 03:02:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.933 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:38.934 ************************************ 00:15:38.934 END TEST test_create_ublk 00:15:38.934 ************************************ 00:15:38.934 03:02:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:38.934 00:15:38.934 real 0m10.811s 00:15:38.934 user 0m0.466s 00:15:38.934 sys 0m1.307s 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.934 03:02:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 03:02:54 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:38.934 03:02:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:38.934 03:02:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.934 03:02:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 ************************************ 00:15:38.934 START TEST test_create_multi_ublk 00:15:38.934 ************************************ 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.934 [2024-11-29 03:02:54.902840] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:38.934 [2024-11-29 03:02:54.903961] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.934 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.193 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:39.193 03:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:39.193 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.193 03:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 [2024-11-29 03:02:54.987245] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:39.193 [2024-11-29 
03:02:54.987560] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:39.193 [2024-11-29 03:02:54.987573] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:39.193 [2024-11-29 03:02:54.987578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.193 [2024-11-29 03:02:54.998889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.193 [2024-11-29 03:02:54.998907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.193 [2024-11-29 03:02:55.010860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.193 [2024-11-29 03:02:55.011374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:39.193 [2024-11-29 03:02:55.024925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.193 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.193 [2024-11-29 03:02:55.120951] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:39.193 [2024-11-29 03:02:55.121285] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:39.194 [2024-11-29 03:02:55.121297] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:39.194 [2024-11-29 03:02:55.121304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.194 [2024-11-29 03:02:55.132879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.194 [2024-11-29 03:02:55.132900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.194 [2024-11-29 03:02:55.144858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.194 [2024-11-29 03:02:55.145393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:39.194 [2024-11-29 03:02:55.150700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.194 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.451 [2024-11-29 03:02:55.256948] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:39.451 [2024-11-29 03:02:55.257267] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:39.451 [2024-11-29 03:02:55.257281] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:39.451 [2024-11-29 03:02:55.257287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.451 [2024-11-29 03:02:55.268868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.451 [2024-11-29 03:02:55.268885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.451 [2024-11-29 03:02:55.280852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.451 [2024-11-29 03:02:55.281369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:39.451 [2024-11-29 03:02:55.305860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.451 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.451 [2024-11-29 03:02:55.412940] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:39.451 [2024-11-29 03:02:55.413268] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:39.451 [2024-11-29 03:02:55.413281] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:39.451 [2024-11-29 03:02:55.413287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.451 [2024-11-29 03:02:55.424860] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.451 [2024-11-29 03:02:55.424883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.451 [2024-11-29 03:02:55.436865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.451 [2024-11-29 03:02:55.437382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:39.709 [2024-11-29 03:02:55.461847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.709 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:39.710 { 00:15:39.710 "ublk_device": "/dev/ublkb0", 00:15:39.710 "id": 0, 00:15:39.710 "queue_depth": 512, 00:15:39.710 "num_queues": 4, 00:15:39.710 "bdev_name": "Malloc0" 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "ublk_device": "/dev/ublkb1", 00:15:39.710 "id": 1, 00:15:39.710 "queue_depth": 512, 00:15:39.710 "num_queues": 4, 00:15:39.710 "bdev_name": "Malloc1" 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "ublk_device": "/dev/ublkb2", 00:15:39.710 "id": 2, 00:15:39.710 "queue_depth": 512, 00:15:39.710 "num_queues": 4, 00:15:39.710 "bdev_name": "Malloc2" 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "ublk_device": "/dev/ublkb3", 00:15:39.710 "id": 3, 00:15:39.710 "queue_depth": 512, 00:15:39.710 "num_queues": 4, 00:15:39.710 "bdev_name": "Malloc3" 00:15:39.710 } 00:15:39.710 ]' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:39.710 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:39.710 03:02:55 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:39.969 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:40.229 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:40.229 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.229 03:02:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.229 [2024-11-29 03:02:56.156931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:15:40.229 [2024-11-29 03:02:56.192850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.229 [2024-11-29 03:02:56.193763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.229 [2024-11-29 03:02:56.205904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.229 [2024-11-29 03:02:56.206148] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:40.229 [2024-11-29 03:02:56.206155] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.229 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.488 [2024-11-29 03:02:56.228923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.488 [2024-11-29 03:02:56.280850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.489 [2024-11-29 03:02:56.281698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.489 [2024-11-29 03:02:56.292852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.489 [2024-11-29 03:02:56.293102] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:40.489 [2024-11-29 03:02:56.293108] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.489 [2024-11-29 03:02:56.304925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.489 [2024-11-29 03:02:56.336888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.489 [2024-11-29 03:02:56.337634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.489 [2024-11-29 03:02:56.349881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.489 [2024-11-29 03:02:56.350118] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:40.489 [2024-11-29 03:02:56.350124] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:40.489 [2024-11-29 
03:02:56.372914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.489 [2024-11-29 03:02:56.419880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.489 [2024-11-29 03:02:56.420526] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.489 [2024-11-29 03:02:56.436860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.489 [2024-11-29 03:02:56.437086] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:40.489 [2024-11-29 03:02:56.437092] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.489 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:40.747 [2024-11-29 03:02:56.636903] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:40.747 [2024-11-29 03:02:56.638085] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:40.747 [2024-11-29 03:02:56.638115] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:40.747 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:40.747 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:40.747 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:40.747 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.747 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
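For reference, the multi-disk teardown that ublk.sh performs above through rpc_cmd reduces to roughly the following direct rpc.py calls (a sketch; the rpc.py path and the 120 s timeout mirror the trace, and the loop over four disks is an assumption matching MAX_DEV_ID here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do
        $RPC ublk_stop_disk "$i"             # drives UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV per device
    done
    $RPC -t 120 ublk_destroy_target          # waits up to 120 s for the ublk target shutdown to finish
    for i in 0 1 2 3; do
        $RPC bdev_malloc_delete "Malloc$i"   # release the backing malloc bdevs
    done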
00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:41.006 03:02:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:41.264 03:02:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:41.265 ************************************ 00:15:41.265 END TEST test_create_multi_ublk 00:15:41.265 ************************************ 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:41.265 00:15:41.265 real 0m2.170s 00:15:41.265 user 0m0.832s 00:15:41.265 sys 0m0.137s 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.265 03:02:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.265 03:02:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:41.265 03:02:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:41.265 03:02:57 ublk -- ublk/ublk.sh@130 -- # killprocess 84564 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@954 -- # '[' -z 84564 ']' 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@958 -- # kill -0 84564 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@959 -- # uname 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84564 00:15:41.265 killing process with pid 84564 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84564' 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@973 -- # kill 84564 00:15:41.265 03:02:57 ublk -- common/autotest_common.sh@978 -- # wait 84564 00:15:41.525 [2024-11-29 03:02:57.332616] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.525 [2024-11-29 03:02:57.332693] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.787 00:15:41.787 real 0m19.237s 00:15:41.787 user 0m28.294s 00:15:41.787 sys 0m8.647s 00:15:41.787 03:02:57 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.787 ************************************ 00:15:41.787 END TEST ublk 00:15:41.787 03:02:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 ************************************ 00:15:41.787 03:02:57 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:41.787 03:02:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.787 
03:02:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.787 03:02:57 -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 ************************************ 00:15:41.787 START TEST ublk_recovery 00:15:41.787 ************************************ 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:41.788 * Looking for test storage... 00:15:41.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.788 03:02:57 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.788 --rc genhtml_branch_coverage=1 00:15:41.788 --rc genhtml_function_coverage=1 00:15:41.788 --rc genhtml_legend=1 00:15:41.788 --rc geninfo_all_blocks=1 00:15:41.788 --rc geninfo_unexecuted_blocks=1 00:15:41.788 00:15:41.788 ' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.788 --rc genhtml_branch_coverage=1 00:15:41.788 --rc genhtml_function_coverage=1 00:15:41.788 --rc genhtml_legend=1 00:15:41.788 --rc geninfo_all_blocks=1 00:15:41.788 --rc geninfo_unexecuted_blocks=1 00:15:41.788 00:15:41.788 ' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.788 --rc genhtml_branch_coverage=1 00:15:41.788 --rc genhtml_function_coverage=1 00:15:41.788 --rc genhtml_legend=1 00:15:41.788 --rc geninfo_all_blocks=1 00:15:41.788 --rc geninfo_unexecuted_blocks=1 00:15:41.788 00:15:41.788 ' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.788 --rc genhtml_branch_coverage=1 00:15:41.788 --rc genhtml_function_coverage=1 00:15:41.788 --rc genhtml_legend=1 00:15:41.788 --rc geninfo_all_blocks=1 00:15:41.788 --rc geninfo_unexecuted_blocks=1 00:15:41.788 00:15:41.788 ' 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:41.788 03:02:57 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84930 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:41.788 03:02:57 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84930 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 84930 ']' 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.788 03:02:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.050 [2024-11-29 03:02:57.858300] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:42.050 [2024-11-29 03:02:57.858688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84930 ] 00:15:42.050 [2024-11-29 03:02:58.001676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.050 [2024-11-29 03:02:58.026528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.050 [2024-11-29 03:02:58.026617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:15:42.994 03:02:58 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 [2024-11-29 03:02:58.698850] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:42.994 [2024-11-29 03:02:58.700092] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.994 03:02:58 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 malloc0 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.994 03:02:58 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 [2024-11-29 03:02:58.738970] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:42.994 [2024-11-29 03:02:58.739054] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:42.994 [2024-11-29 03:02:58.739060] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:42.994 [2024-11-29 03:02:58.739069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:42.994 [2024-11-29 03:02:58.747948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:42.994 [2024-11-29 03:02:58.747972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:42.994 [2024-11-29 03:02:58.754857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:42.994 [2024-11-29 03:02:58.754988] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:42.994 [2024-11-29 03:02:58.776851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:42.994 1 00:15:42.994 03:02:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.994 03:02:58 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:43.936 03:02:59 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84963 00:15:43.936 03:02:59 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:43.936 03:02:59 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:43.936 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.936 fio-3.35 00:15:43.936 Starting 1 process 00:15:49.207 03:03:04 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84930 00:15:49.207 03:03:04 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:54.494 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84930 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:54.494 03:03:09 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:54.494 03:03:09 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85075 00:15:54.494 03:03:09 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.494 03:03:09 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85075 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85075 ']' 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.494 03:03:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.494 [2024-11-29 03:03:09.868739] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
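The recovery scenario that ublk_recovery.sh drives here boils down to roughly the following sequence (a sketch, assuming rpc.py is on PATH and using the sizes from the trace; $spdk_tgt_pid is a stand-in for the pid of the first target, 84930 above):

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128     # exports /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_tgt_pid"                          # crash the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &        # start a fresh target
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1               # reattach the surviving /dev/ublkb1

The point of the test is that the kernel-side ublk device outlives the user-space target: after the new target recreates malloc0, ublk_recover_disk walks UBLK_CMD_GET_DEV_INFO / UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY instead of UBLK_CMD_ADD_DEV, as the control-command trace below shows, and the fio job keeps running against /dev/ublkb1 across the restart.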
00:15:54.495 [2024-11-29 03:03:09.869028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85075 ] 00:15:54.495 [2024-11-29 03:03:10.011157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.495 [2024-11-29 03:03:10.029460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.495 [2024-11-29 03:03:10.029583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:15:54.753 03:03:10 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.753 [2024-11-29 03:03:10.654842] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:54.753 [2024-11-29 03:03:10.655808] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.753 03:03:10 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.753 malloc0 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.753 03:03:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.753 [2024-11-29 03:03:10.686936] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:54.753 [2024-11-29 03:03:10.686971] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:54.753 [2024-11-29 03:03:10.686977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:54.753 [2024-11-29 03:03:10.694871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:54.753 [2024-11-29 03:03:10.694886] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:54.753 [2024-11-29 03:03:10.694895] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:54.753 [2024-11-29 03:03:10.694945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:54.753 1 00:15:54.753 03:03:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.753 03:03:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84963 00:15:54.753 [2024-11-29 03:03:10.702848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:54.753 [2024-11-29 03:03:10.709378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:54.753 [2024-11-29 03:03:10.717019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:54.753 [2024-11-29 
03:03:10.717036] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:50.974 00:16:50.974 fio_test: (groupid=0, jobs=1): err= 0: pid=84967: Fri Nov 29 03:04:00 2024 00:16:50.974 read: IOPS=25.2k, BW=98.5MiB/s (103MB/s)(5909MiB/60001msec) 00:16:50.974 slat (nsec): min=1047, max=151689, avg=5555.39, stdev=1310.47 00:16:50.974 clat (usec): min=706, max=5934.4k, avg=2478.32, stdev=37670.04 00:16:50.974 lat (usec): min=712, max=5934.4k, avg=2483.87, stdev=37670.03 00:16:50.974 clat percentiles (usec): 00:16:50.974 | 1.00th=[ 1745], 5.00th=[ 1942], 10.00th=[ 2024], 20.00th=[ 2057], 00:16:50.974 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:16:50.974 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2343], 95.00th=[ 3195], 00:16:50.974 | 99.00th=[ 5407], 99.50th=[ 5997], 99.90th=[ 7701], 99.95th=[ 8717], 00:16:50.974 | 99.99th=[13173] 00:16:50.974 bw ( KiB/s): min=28160, max=131112, per=100.00%, avg=111050.22, stdev=13863.81, samples=108 00:16:50.974 iops : min= 7040, max=32778, avg=27762.56, stdev=3465.95, samples=108 00:16:50.974 write: IOPS=25.2k, BW=98.3MiB/s (103MB/s)(5900MiB/60001msec); 0 zone resets 00:16:50.974 slat (nsec): min=1043, max=3882.0k, avg=5765.57, stdev=3430.31 00:16:50.974 clat (usec): min=718, max=5934.9k, avg=2590.42, stdev=39505.93 00:16:50.974 lat (usec): min=723, max=5934.9k, avg=2596.19, stdev=39505.92 00:16:50.974 clat percentiles (usec): 00:16:50.974 | 1.00th=[ 1827], 5.00th=[ 2024], 10.00th=[ 2114], 20.00th=[ 2147], 00:16:50.974 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:16:50.974 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2376], 95.00th=[ 3130], 00:16:50.974 | 99.00th=[ 5473], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8848], 00:16:50.974 | 99.99th=[13304] 00:16:50.974 bw ( KiB/s): min=28256, max=133720, per=100.00%, avg=110905.26, stdev=13881.10, samples=108 00:16:50.974 iops : min= 7064, max=33430, avg=27726.31, stdev=3470.27, samples=108 00:16:50.974 lat (usec) : 750=0.01%, 1000=0.01% 00:16:50.974 lat (msec) : 2=6.14%, 4=91.03%, 10=2.79%, 20=0.03%, >=2000=0.01% 00:16:50.974 cpu : usr=5.52%, sys=29.57%, ctx=99292, majf=0, minf=13 00:16:50.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:50.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.974 issued rwts: total=1512586,1510524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.974 00:16:50.974 Run status group 0 (all jobs): 00:16:50.974 READ: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=5909MiB (6196MB), run=60001-60001msec 00:16:50.974 WRITE: bw=98.3MiB/s (103MB/s), 98.3MiB/s-98.3MiB/s (103MB/s-103MB/s), io=5900MiB (6187MB), run=60001-60001msec 00:16:50.974 00:16:50.974 Disk stats (read/write): 00:16:50.974 ublkb1: ios=1509550/1507425, merge=0/0, ticks=3654949/3687141, in_queue=7342090, util=99.89% 00:16:50.974 03:04:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.974 [2024-11-29 03:04:00.043446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:50.974 [2024-11-29 03:04:00.080870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:16:50.974 [2024-11-29 03:04:00.081064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:50.974 [2024-11-29 03:04:00.089871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:50.974 [2024-11-29 03:04:00.089982] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:50.974 [2024-11-29 03:04:00.089990] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.974 03:04:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.974 [2024-11-29 03:04:00.102932] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:50.974 [2024-11-29 03:04:00.108319] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:50.974 [2024-11-29 03:04:00.108355] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.974 03:04:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:50.974 03:04:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:50.974 03:04:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85075 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85075 ']' 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85075 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85075 00:16:50.974 killing process with pid 85075 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85075' 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85075 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85075 00:16:50.974 [2024-11-29 03:04:00.360235] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:50.974 [2024-11-29 03:04:00.360292] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:50.974 00:16:50.974 real 1m3.101s 00:16:50.974 user 1m37.372s 00:16:50.974 sys 0m39.241s 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.974 03:04:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.974 ************************************ 00:16:50.975 END TEST ublk_recovery 00:16:50.975 ************************************ 00:16:50.975 03:04:00 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:16:50.975 03:04:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:50.975 03:04:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.975 03:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.975 03:04:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:50.975 03:04:00 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:50.975 03:04:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.975 03:04:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.975 03:04:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.975 ************************************ 00:16:50.975 START TEST ftl 00:16:50.975 ************************************ 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:50.975 * Looking for test storage... 00:16:50.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.975 03:04:00 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.975 03:04:00 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.975 03:04:00 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.975 03:04:00 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.975 03:04:00 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.975 03:04:00 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:50.975 03:04:00 ftl -- scripts/common.sh@345 -- # : 1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.975 03:04:00 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.975 03:04:00 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@353 -- # local d=1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.975 03:04:00 ftl -- scripts/common.sh@355 -- # echo 1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.975 03:04:00 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@353 -- # local d=2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.975 03:04:00 ftl -- scripts/common.sh@355 -- # echo 2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.975 03:04:00 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.975 03:04:00 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.975 03:04:00 ftl -- scripts/common.sh@368 -- # return 0 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.975 --rc genhtml_branch_coverage=1 00:16:50.975 --rc genhtml_function_coverage=1 00:16:50.975 --rc genhtml_legend=1 00:16:50.975 --rc geninfo_all_blocks=1 00:16:50.975 --rc geninfo_unexecuted_blocks=1 00:16:50.975 00:16:50.975 ' 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.975 --rc genhtml_branch_coverage=1 00:16:50.975 --rc genhtml_function_coverage=1 00:16:50.975 --rc genhtml_legend=1 00:16:50.975 --rc geninfo_all_blocks=1 00:16:50.975 --rc geninfo_unexecuted_blocks=1 00:16:50.975 00:16:50.975 ' 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.975 --rc genhtml_branch_coverage=1 00:16:50.975 --rc genhtml_function_coverage=1 00:16:50.975 --rc genhtml_legend=1 00:16:50.975 --rc geninfo_all_blocks=1 00:16:50.975 --rc geninfo_unexecuted_blocks=1 00:16:50.975 00:16:50.975 ' 00:16:50.975 03:04:00 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.975 --rc genhtml_branch_coverage=1 00:16:50.975 --rc genhtml_function_coverage=1 00:16:50.975 --rc genhtml_legend=1 00:16:50.975 --rc geninfo_all_blocks=1 00:16:50.975 --rc geninfo_unexecuted_blocks=1 00:16:50.975 00:16:50.975 ' 00:16:50.975 03:04:00 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:50.975 03:04:00 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:50.975 03:04:00 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.975 03:04:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.975 03:04:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
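The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, which selects the legacy --rc lcov_branch_coverage/lcov_function_coverage flags exported right after it. Condensed into a standalone sketch (names follow the trace; the real helper also routes each field through the decimal() sanitizer visible above, abridged here):

    # Split versions on "." / "-" / ":" and compare field by field; absent
    # fields count as 0, so "1.15" vs "2" compares as (1,15) vs (2,0).
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" || $op == "<=" ]]; return; }
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" || $op == ">=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    lt 1.15 2 && echo "pre-2.x lcov: keep the --rc lcov_*_coverage options"
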
00:16:50.975 03:04:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:50.975 03:04:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.975 03:04:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.975 03:04:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.975 03:04:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.975 03:04:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.975 03:04:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:50.975 03:04:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:50.975 03:04:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.975 03:04:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.975 03:04:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:50.975 03:04:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.976 03:04:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.976 03:04:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.976 03:04:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.976 03:04:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:50.976 03:04:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:50.976 03:04:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.976 03:04:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:50.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:50.976 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:50.976 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:50.976 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:50.976 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85874 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:50.976 03:04:01 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85874 00:16:50.976 03:04:01 ftl -- common/autotest_common.sh@835 -- # '[' -z 85874 ']' 00:16:50.976 03:04:01 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.976 03:04:01 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.976 03:04:01 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.976 03:04:01 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.976 03:04:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 [2024-11-29 03:04:01.554216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:50.976 [2024-11-29 03:04:01.554521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85874 ] 00:16:50.976 [2024-11-29 03:04:01.705201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.976 [2024-11-29 03:04:01.746596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.976 03:04:02 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.976 03:04:02 ftl -- common/autotest_common.sh@868 -- # return 0 00:16:50.976 03:04:02 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:50.976 03:04:02 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@50 -- # break 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:50.976 03:04:03 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@63 -- # break 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@66 -- # killprocess 85874 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 85874 ']' 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@958 -- # kill -0 85874 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@959 -- # uname 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.976 03:04:04 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85874 00:16:50.976 killing process with pid 85874 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85874' 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@973 -- # kill 85874 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@978 -- # wait 85874 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:50.976 03:04:04 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.976 03:04:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 ************************************ 00:16:50.976 START TEST ftl_fio_basic 00:16:50.976 ************************************ 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:50.976 * Looking for test storage... 00:16:50.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.976 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.977 --rc genhtml_branch_coverage=1 00:16:50.977 --rc genhtml_function_coverage=1 00:16:50.977 --rc genhtml_legend=1 00:16:50.977 --rc geninfo_all_blocks=1 00:16:50.977 --rc geninfo_unexecuted_blocks=1 00:16:50.977 00:16:50.977 ' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
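The same dirname/readlink dance repeats each time a test script pulls in ftl/common.sh; together with the rootdir= assignment that follows it amounts to this (a sketch, annotated with the values this run resolved):

    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # all target control below goes through rpc.py
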
00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85993 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85993 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 85993 ']' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:50.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.977 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.978 03:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:50.978 [2024-11-29 03:04:04.635215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
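At this point fio.sh has launched spdk_tgt with -m 7 (reactor mask 0x7, i.e. cores 0-2, matching the three reactor threads reported below) and recorded its pid in svcpid=85993; waitforlisten then blocks until the RPC socket answers. A sketch of that pattern (the real waitforlisten in autotest_common.sh also takes the rpc_addr and max_retries seen in the trace):

    "$rootdir/build/bin/spdk_tgt" -m 7 &            # reactor mask 0x7 -> cores 0,1,2
    svcpid=$!
    # Poll the UNIX-domain RPC socket until the target services a request;
    # rpc_get_methods is a cheap query suitable for this.
    while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1     # give up if the target died
        sleep 0.5
    done
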
00:16:50.978 [2024-11-29 03:04:04.635535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85993 ] 00:16:50.978 [2024-11-29 03:04:04.779971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.978 [2024-11-29 03:04:04.805231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.978 [2024-11-29 03:04:04.805523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.978 [2024-11-29 03:04:04.805574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:50.978 { 00:16:50.978 "name": "nvme0n1", 00:16:50.978 "aliases": [ 00:16:50.978 "7db7e4c7-89fd-4da7-aaef-7c88d55484e4" 00:16:50.978 ], 00:16:50.978 "product_name": "NVMe disk", 00:16:50.978 "block_size": 4096, 00:16:50.978 "num_blocks": 1310720, 00:16:50.978 "uuid": "7db7e4c7-89fd-4da7-aaef-7c88d55484e4", 00:16:50.978 "numa_id": -1, 00:16:50.978 "assigned_rate_limits": { 00:16:50.978 "rw_ios_per_sec": 0, 00:16:50.978 "rw_mbytes_per_sec": 0, 00:16:50.978 "r_mbytes_per_sec": 0, 00:16:50.978 "w_mbytes_per_sec": 0 00:16:50.978 }, 00:16:50.978 "claimed": false, 00:16:50.978 "zoned": false, 00:16:50.978 "supported_io_types": { 00:16:50.978 "read": true, 00:16:50.978 "write": true, 00:16:50.978 "unmap": true, 00:16:50.978 "flush": true, 00:16:50.978 "reset": true, 00:16:50.978 "nvme_admin": true, 00:16:50.978 "nvme_io": true, 00:16:50.978 "nvme_io_md": false, 00:16:50.978 "write_zeroes": true, 00:16:50.978 "zcopy": false, 00:16:50.978 "get_zone_info": false, 00:16:50.978 "zone_management": false, 00:16:50.978 "zone_append": false, 00:16:50.978 "compare": true, 00:16:50.978 "compare_and_write": false, 00:16:50.978 "abort": true, 00:16:50.978 
"seek_hole": false, 00:16:50.978 "seek_data": false, 00:16:50.978 "copy": true, 00:16:50.978 "nvme_iov_md": false 00:16:50.978 }, 00:16:50.978 "driver_specific": { 00:16:50.978 "nvme": [ 00:16:50.978 { 00:16:50.978 "pci_address": "0000:00:11.0", 00:16:50.978 "trid": { 00:16:50.978 "trtype": "PCIe", 00:16:50.978 "traddr": "0000:00:11.0" 00:16:50.978 }, 00:16:50.978 "ctrlr_data": { 00:16:50.978 "cntlid": 0, 00:16:50.978 "vendor_id": "0x1b36", 00:16:50.978 "model_number": "QEMU NVMe Ctrl", 00:16:50.978 "serial_number": "12341", 00:16:50.978 "firmware_revision": "8.0.0", 00:16:50.978 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:50.978 "oacs": { 00:16:50.978 "security": 0, 00:16:50.978 "format": 1, 00:16:50.978 "firmware": 0, 00:16:50.978 "ns_manage": 1 00:16:50.978 }, 00:16:50.978 "multi_ctrlr": false, 00:16:50.978 "ana_reporting": false 00:16:50.978 }, 00:16:50.978 "vs": { 00:16:50.978 "nvme_version": "1.4" 00:16:50.978 }, 00:16:50.978 "ns_data": { 00:16:50.978 "id": 1, 00:16:50.978 "can_share": false 00:16:50.978 } 00:16:50.978 } 00:16:50.978 ], 00:16:50.978 "mp_policy": "active_passive" 00:16:50.978 } 00:16:50.978 } 00:16:50.978 ]' 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:50.978 03:04:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=03f91550-adaa-4153-b95d-c86b362bd15b 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 03f91550-adaa-4153-b95d-c86b362bd15b 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:50.978 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 
00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:50.979 { 00:16:50.979 "name": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:50.979 "aliases": [ 00:16:50.979 "lvs/nvme0n1p0" 00:16:50.979 ], 00:16:50.979 "product_name": "Logical Volume", 00:16:50.979 "block_size": 4096, 00:16:50.979 "num_blocks": 26476544, 00:16:50.979 "uuid": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:50.979 "assigned_rate_limits": { 00:16:50.979 "rw_ios_per_sec": 0, 00:16:50.979 "rw_mbytes_per_sec": 0, 00:16:50.979 "r_mbytes_per_sec": 0, 00:16:50.979 "w_mbytes_per_sec": 0 00:16:50.979 }, 00:16:50.979 "claimed": false, 00:16:50.979 "zoned": false, 00:16:50.979 "supported_io_types": { 00:16:50.979 "read": true, 00:16:50.979 "write": true, 00:16:50.979 "unmap": true, 00:16:50.979 "flush": false, 00:16:50.979 "reset": true, 00:16:50.979 "nvme_admin": false, 00:16:50.979 "nvme_io": false, 00:16:50.979 "nvme_io_md": false, 00:16:50.979 "write_zeroes": true, 00:16:50.979 "zcopy": false, 00:16:50.979 "get_zone_info": false, 00:16:50.979 "zone_management": false, 00:16:50.979 "zone_append": false, 00:16:50.979 "compare": false, 00:16:50.979 "compare_and_write": false, 00:16:50.979 "abort": false, 00:16:50.979 "seek_hole": true, 00:16:50.979 "seek_data": true, 00:16:50.979 "copy": false, 00:16:50.979 "nvme_iov_md": false 00:16:50.979 }, 00:16:50.979 "driver_specific": { 00:16:50.979 "lvol": { 00:16:50.979 "lvol_store_uuid": "03f91550-adaa-4153-b95d-c86b362bd15b", 00:16:50.979 "base_bdev": "nvme0n1", 00:16:50.979 "thin_provision": true, 00:16:50.979 "num_allocated_clusters": 0, 00:16:50.979 "snapshot": false, 00:16:50.979 "clone": false, 00:16:50.979 "esnap_clone": false 00:16:50.979 } 00:16:50.979 } 00:16:50.979 } 00:16:50.979 ]' 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:50.979 03:04:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.238 03:04:07 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:51.238 { 00:16:51.238 "name": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:51.238 "aliases": [ 00:16:51.238 "lvs/nvme0n1p0" 00:16:51.238 ], 00:16:51.238 "product_name": "Logical Volume", 00:16:51.238 "block_size": 4096, 00:16:51.238 "num_blocks": 26476544, 00:16:51.238 "uuid": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:51.238 "assigned_rate_limits": { 00:16:51.238 "rw_ios_per_sec": 0, 00:16:51.238 "rw_mbytes_per_sec": 0, 00:16:51.238 "r_mbytes_per_sec": 0, 00:16:51.238 "w_mbytes_per_sec": 0 00:16:51.238 }, 00:16:51.238 "claimed": false, 00:16:51.238 "zoned": false, 00:16:51.238 "supported_io_types": { 00:16:51.238 "read": true, 00:16:51.238 "write": true, 00:16:51.238 "unmap": true, 00:16:51.238 "flush": false, 00:16:51.238 "reset": true, 00:16:51.238 "nvme_admin": false, 00:16:51.238 "nvme_io": false, 00:16:51.238 "nvme_io_md": false, 00:16:51.238 "write_zeroes": true, 00:16:51.238 "zcopy": false, 00:16:51.238 "get_zone_info": false, 00:16:51.238 "zone_management": false, 00:16:51.238 "zone_append": false, 00:16:51.238 "compare": false, 00:16:51.238 "compare_and_write": false, 00:16:51.238 "abort": false, 00:16:51.238 "seek_hole": true, 00:16:51.238 "seek_data": true, 00:16:51.238 "copy": false, 00:16:51.238 "nvme_iov_md": false 00:16:51.238 }, 00:16:51.238 "driver_specific": { 00:16:51.238 "lvol": { 00:16:51.238 "lvol_store_uuid": "03f91550-adaa-4153-b95d-c86b362bd15b", 00:16:51.238 "base_bdev": "nvme0n1", 00:16:51.238 "thin_provision": true, 00:16:51.238 "num_allocated_clusters": 0, 00:16:51.238 "snapshot": false, 00:16:51.238 "clone": false, 00:16:51.238 "esnap_clone": false 00:16:51.238 } 00:16:51.238 } 00:16:51.238 } 00:16:51.238 ]' 00:16:51.238 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:51.497 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:51.497 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:51.756 { 00:16:51.756 "name": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:51.756 "aliases": [ 00:16:51.756 "lvs/nvme0n1p0" 00:16:51.756 ], 00:16:51.756 "product_name": "Logical Volume", 00:16:51.756 "block_size": 4096, 00:16:51.756 "num_blocks": 26476544, 00:16:51.756 "uuid": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:51.756 "assigned_rate_limits": { 00:16:51.756 "rw_ios_per_sec": 0, 00:16:51.756 "rw_mbytes_per_sec": 0, 00:16:51.756 "r_mbytes_per_sec": 0, 00:16:51.756 "w_mbytes_per_sec": 0 00:16:51.756 }, 00:16:51.756 "claimed": false, 00:16:51.756 "zoned": false, 00:16:51.756 "supported_io_types": { 00:16:51.756 "read": true, 00:16:51.756 "write": true, 00:16:51.756 "unmap": true, 00:16:51.756 "flush": false, 00:16:51.756 "reset": true, 00:16:51.756 "nvme_admin": false, 00:16:51.756 "nvme_io": false, 00:16:51.756 "nvme_io_md": false, 00:16:51.756 "write_zeroes": true, 00:16:51.756 "zcopy": false, 00:16:51.756 "get_zone_info": false, 00:16:51.756 "zone_management": false, 00:16:51.756 "zone_append": false, 00:16:51.756 "compare": false, 00:16:51.756 "compare_and_write": false, 00:16:51.756 "abort": false, 00:16:51.756 "seek_hole": true, 00:16:51.756 "seek_data": true, 00:16:51.756 "copy": false, 00:16:51.756 "nvme_iov_md": false 00:16:51.756 }, 00:16:51.756 "driver_specific": { 00:16:51.756 "lvol": { 00:16:51.756 "lvol_store_uuid": "03f91550-adaa-4153-b95d-c86b362bd15b", 00:16:51.756 "base_bdev": "nvme0n1", 00:16:51.756 "thin_provision": true, 00:16:51.756 "num_allocated_clusters": 0, 00:16:51.756 "snapshot": false, 00:16:51.756 "clone": false, 00:16:51.756 "esnap_clone": false 00:16:51.756 } 00:16:51.756 } 00:16:51.756 } 00:16:51.756 ]' 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:51.756 03:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 -c nvc0n1p0 --l2p_dram_limit 60 00:16:52.016 [2024-11-29 03:04:07.926993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.927116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:52.016 [2024-11-29 03:04:07.927132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:52.016 
[2024-11-29 03:04:07.927141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.927201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.927210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.016 [2024-11-29 03:04:07.927227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:52.016 [2024-11-29 03:04:07.927236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.927268] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:52.016 [2024-11-29 03:04:07.927477] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:52.016 [2024-11-29 03:04:07.927489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.927497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.016 [2024-11-29 03:04:07.927504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:16:52.016 [2024-11-29 03:04:07.927512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.927575] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef9b65e1-b8b8-4c3c-8ecc-074ce9413770 00:16:52.016 [2024-11-29 03:04:07.928816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.928857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:52.016 [2024-11-29 03:04:07.928868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:52.016 [2024-11-29 03:04:07.928875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.935671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.935702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.016 [2024-11-29 03:04:07.935712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.733 ms 00:16:52.016 [2024-11-29 03:04:07.935718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.935819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.935840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.016 [2024-11-29 03:04:07.935850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:52.016 [2024-11-29 03:04:07.935864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.935916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.935924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:52.016 [2024-11-29 03:04:07.935932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:52.016 [2024-11-29 03:04:07.935940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.935970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:52.016 [2024-11-29 03:04:07.937578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 
03:04:07.937691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.016 [2024-11-29 03:04:07.937712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:16:52.016 [2024-11-29 03:04:07.937720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.937754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.937762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:52.016 [2024-11-29 03:04:07.937771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:52.016 [2024-11-29 03:04:07.937781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.937813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:52.016 [2024-11-29 03:04:07.937945] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:52.016 [2024-11-29 03:04:07.937955] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:52.016 [2024-11-29 03:04:07.937968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:52.016 [2024-11-29 03:04:07.937976] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:52.016 [2024-11-29 03:04:07.937984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:52.016 [2024-11-29 03:04:07.937990] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:52.016 [2024-11-29 03:04:07.937997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:52.016 [2024-11-29 03:04:07.938003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:52.016 [2024-11-29 03:04:07.938011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:52.016 [2024-11-29 03:04:07.938018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.938026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:52.016 [2024-11-29 03:04:07.938044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:52.016 [2024-11-29 03:04:07.938051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.938133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.016 [2024-11-29 03:04:07.938151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:52.016 [2024-11-29 03:04:07.938165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:52.016 [2024-11-29 03:04:07.938172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.016 [2024-11-29 03:04:07.938269] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:52.016 [2024-11-29 03:04:07.938279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:52.016 [2024-11-29 03:04:07.938286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938312] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:52.016 [2024-11-29 03:04:07.938319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:52.016 [2024-11-29 03:04:07.938339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.016 [2024-11-29 03:04:07.938352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:52.016 [2024-11-29 03:04:07.938360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:52.016 [2024-11-29 03:04:07.938373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:52.016 [2024-11-29 03:04:07.938383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:52.016 [2024-11-29 03:04:07.938389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:52.016 [2024-11-29 03:04:07.938397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:52.016 [2024-11-29 03:04:07.938411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:52.016 [2024-11-29 03:04:07.938431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:52.016 [2024-11-29 03:04:07.938452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:52.016 [2024-11-29 03:04:07.938471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:52.016 [2024-11-29 03:04:07.938495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:52.016 [2024-11-29 03:04:07.938508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:52.016 [2024-11-29 03:04:07.938514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:52.016 [2024-11-29 03:04:07.938522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.016 [2024-11-29 03:04:07.938527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:52.016 [2024-11-29 03:04:07.938535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:52.016 [2024-11-29 03:04:07.938540] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:52.016 [2024-11-29 03:04:07.938548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:52.017 [2024-11-29 03:04:07.938554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:52.017 [2024-11-29 03:04:07.938561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.017 [2024-11-29 03:04:07.938567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:52.017 [2024-11-29 03:04:07.938575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:52.017 [2024-11-29 03:04:07.938580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.017 [2024-11-29 03:04:07.938587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:52.017 [2024-11-29 03:04:07.938598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:52.017 [2024-11-29 03:04:07.938607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:52.017 [2024-11-29 03:04:07.938614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:52.017 [2024-11-29 03:04:07.938623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:52.017 [2024-11-29 03:04:07.938629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:52.017 [2024-11-29 03:04:07.938637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:52.017 [2024-11-29 03:04:07.938643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:52.017 [2024-11-29 03:04:07.938651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:52.017 [2024-11-29 03:04:07.938656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:52.017 [2024-11-29 03:04:07.938668] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:52.017 [2024-11-29 03:04:07.938674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:52.017 [2024-11-29 03:04:07.938688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:52.017 [2024-11-29 03:04:07.938694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:52.017 [2024-11-29 03:04:07.938700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:52.017 [2024-11-29 03:04:07.938706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:52.017 [2024-11-29 03:04:07.938712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:52.017 [2024-11-29 03:04:07.938720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:52.017 [2024-11-29 03:04:07.938725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:52.017 [2024-11-29 03:04:07.938732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:52.017 [2024-11-29 03:04:07.938737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:52.017 [2024-11-29 03:04:07.938767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:52.017 [2024-11-29 03:04:07.938774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:52.017 [2024-11-29 03:04:07.938787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:52.017 [2024-11-29 03:04:07.938794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:52.017 [2024-11-29 03:04:07.938800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:52.017 [2024-11-29 03:04:07.938817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.017 [2024-11-29 03:04:07.938836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:52.017 [2024-11-29 03:04:07.938846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:16:52.017 [2024-11-29 03:04:07.938852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.017 [2024-11-29 03:04:07.938910] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
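Two readings of the trace above. First, the earlier "[: -eq: unary operator expected" from fio.sh line 52: the variable on the left of -eq expanded to nothing, so test saw only '-eq 1'; the check simply fails and the run proceeds with the defaults. Guarding the expansion would silence it (hypothetical variable name, not what fio.sh uses):

    [ "${some_flag:-0}" -eq 1 ]   # defaults an unset variable to 0 so [ always gets two operands

Second, the device being brought up. The command issued was the one traced before the startup messages:

    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d 12f45e8d-31fd-4337-bf4b-4ac9d825b7f2 \
        -c nvc0n1p0 --l2p_dram_limit 60

The layout dump is self-consistent with it: 20971520 L2P entries x 4 B per address = 80 MiB of mapping table, exactly the "Region l2p ... blocks: 80.00 MiB" line, while --l2p_dram_limit 60 caps how much of that map stays resident, which is why startup later reports "l2p maximum resident size is: 59 (of 60) MiB". The 240 s RPC timeout (-t 240, the suite's $timeout) covers the first-time NV cache scrub that begins here.
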
00:16:52.017 [2024-11-29 03:04:07.938927] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:53.918 [2024-11-29 03:04:09.907689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.918 [2024-11-29 03:04:09.907729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:53.918 [2024-11-29 03:04:09.907752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1968.770 ms 00:16:53.918 [2024-11-29 03:04:09.907759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.917787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.917824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.178 [2024-11-29 03:04:09.917847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.957 ms 00:16:54.178 [2024-11-29 03:04:09.917854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.917932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.917939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:54.178 [2024-11-29 03:04:09.917947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:54.178 [2024-11-29 03:04:09.917953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.936047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.936084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.178 [2024-11-29 03:04:09.936095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.045 ms 00:16:54.178 [2024-11-29 03:04:09.936102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.936139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.936147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.178 [2024-11-29 03:04:09.936155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:54.178 [2024-11-29 03:04:09.936161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.936587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.936604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.178 [2024-11-29 03:04:09.936625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:16:54.178 [2024-11-29 03:04:09.936631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.936747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.936755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.178 [2024-11-29 03:04:09.936764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:54.178 [2024-11-29 03:04:09.936771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.944095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.944132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.178 [2024-11-29 
03:04:09.944149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.293 ms 00:16:54.178 [2024-11-29 03:04:09.944171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:09.953230] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:54.178 [2024-11-29 03:04:09.968441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:09.968471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:54.178 [2024-11-29 03:04:09.968480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.165 ms 00:16:54.178 [2024-11-29 03:04:09.968490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.006510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.006551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:54.178 [2024-11-29 03:04:10.006561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.992 ms 00:16:54.178 [2024-11-29 03:04:10.006572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.006738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.006759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:54.178 [2024-11-29 03:04:10.006767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:54.178 [2024-11-29 03:04:10.006775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.009633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.009774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:54.178 [2024-11-29 03:04:10.009821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:16:54.178 [2024-11-29 03:04:10.009877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.012013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.012097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:54.178 [2024-11-29 03:04:10.012142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:16:54.178 [2024-11-29 03:04:10.012179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.012477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.012529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:54.178 [2024-11-29 03:04:10.012567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:16:54.178 [2024-11-29 03:04:10.012580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.035964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.036057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:54.178 [2024-11-29 03:04:10.036100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.352 ms 00:16:54.178 [2024-11-29 03:04:10.036139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.039999] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.040085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:54.178 [2024-11-29 03:04:10.040125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:16:54.178 [2024-11-29 03:04:10.040165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.043102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.043184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:54.178 [2024-11-29 03:04:10.043224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:16:54.178 [2024-11-29 03:04:10.043263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.045964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.046041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.178 [2024-11-29 03:04:10.046081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:16:54.178 [2024-11-29 03:04:10.046119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.046185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.046224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.178 [2024-11-29 03:04:10.046257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.178 [2024-11-29 03:04:10.046287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.046378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.178 [2024-11-29 03:04:10.046424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.178 [2024-11-29 03:04:10.046458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:54.178 [2024-11-29 03:04:10.046493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.178 [2024-11-29 03:04:10.047438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2120.052 ms, result 0 00:16:54.178 { 00:16:54.178 "name": "ftl0", 00:16:54.178 "uuid": "ef9b65e1-b8b8-4c3c-8ecc-074ce9413770" 00:16:54.178 } 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.178 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.437 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:54.697 [ 00:16:54.697 { 00:16:54.697 "name": "ftl0", 00:16:54.697 "aliases": [ 00:16:54.697 "ef9b65e1-b8b8-4c3c-8ecc-074ce9413770" 00:16:54.697 ], 00:16:54.697 "product_name": "FTL disk", 00:16:54.697 
"block_size": 4096, 00:16:54.697 "num_blocks": 20971520, 00:16:54.697 "uuid": "ef9b65e1-b8b8-4c3c-8ecc-074ce9413770", 00:16:54.697 "assigned_rate_limits": { 00:16:54.697 "rw_ios_per_sec": 0, 00:16:54.697 "rw_mbytes_per_sec": 0, 00:16:54.697 "r_mbytes_per_sec": 0, 00:16:54.697 "w_mbytes_per_sec": 0 00:16:54.697 }, 00:16:54.697 "claimed": false, 00:16:54.697 "zoned": false, 00:16:54.697 "supported_io_types": { 00:16:54.697 "read": true, 00:16:54.697 "write": true, 00:16:54.697 "unmap": true, 00:16:54.697 "flush": true, 00:16:54.697 "reset": false, 00:16:54.697 "nvme_admin": false, 00:16:54.697 "nvme_io": false, 00:16:54.697 "nvme_io_md": false, 00:16:54.697 "write_zeroes": true, 00:16:54.697 "zcopy": false, 00:16:54.697 "get_zone_info": false, 00:16:54.697 "zone_management": false, 00:16:54.697 "zone_append": false, 00:16:54.697 "compare": false, 00:16:54.697 "compare_and_write": false, 00:16:54.697 "abort": false, 00:16:54.697 "seek_hole": false, 00:16:54.697 "seek_data": false, 00:16:54.697 "copy": false, 00:16:54.697 "nvme_iov_md": false 00:16:54.697 }, 00:16:54.697 "driver_specific": { 00:16:54.697 "ftl": { 00:16:54.697 "base_bdev": "12f45e8d-31fd-4337-bf4b-4ac9d825b7f2", 00:16:54.697 "cache": "nvc0n1p0" 00:16:54.697 } 00:16:54.697 } 00:16:54.697 } 00:16:54.697 ] 00:16:54.697 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:16:54.697 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:54.697 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:54.697 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:54.697 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:54.957 [2024-11-29 03:04:10.812242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.812273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.957 [2024-11-29 03:04:10.812284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.957 [2024-11-29 03:04:10.812291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.812324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.957 [2024-11-29 03:04:10.812862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.812879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.957 [2024-11-29 03:04:10.812887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:16:54.957 [2024-11-29 03:04:10.812895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.813278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.813295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.957 [2024-11-29 03:04:10.813303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:16:54.957 [2024-11-29 03:04:10.813311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.815728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.815750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.957 [2024-11-29 
03:04:10.815760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:16:54.957 [2024-11-29 03:04:10.815769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.820505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.820527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:54.957 [2024-11-29 03:04:10.820535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:16:54.957 [2024-11-29 03:04:10.820543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.822140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.822172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.957 [2024-11-29 03:04:10.822179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:16:54.957 [2024-11-29 03:04:10.822186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.826271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.826303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.957 [2024-11-29 03:04:10.826311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.049 ms 00:16:54.957 [2024-11-29 03:04:10.826318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.826449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.826463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.957 [2024-11-29 03:04:10.826470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:54.957 [2024-11-29 03:04:10.826478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.827750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.827777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:54.957 [2024-11-29 03:04:10.827784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:16:54.957 [2024-11-29 03:04:10.827950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.957 [2024-11-29 03:04:10.828998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.957 [2024-11-29 03:04:10.829029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:54.957 [2024-11-29 03:04:10.829035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:16:54.957 [2024-11-29 03:04:10.829042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.958 [2024-11-29 03:04:10.829869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.958 [2024-11-29 03:04:10.829894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.958 [2024-11-29 03:04:10.829901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:16:54.958 [2024-11-29 03:04:10.829908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.958 [2024-11-29 03:04:10.830668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.958 [2024-11-29 03:04:10.830693] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.958 [2024-11-29 03:04:10.830699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:16:54.958 [2024-11-29 03:04:10.830706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.958 [2024-11-29 03:04:10.830734] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.958 [2024-11-29 03:04:10.830747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 
03:04:10.830921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.830998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:54.958 [2024-11-29 03:04:10.831089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.958 [2024-11-29 03:04:10.831304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.959 [2024-11-29 03:04:10.831440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.959 [2024-11-29 03:04:10.831446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9b65e1-b8b8-4c3c-8ecc-074ce9413770 00:16:54.959 [2024-11-29 03:04:10.831453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:54.959 [2024-11-29 03:04:10.831459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.959 [2024-11-29 03:04:10.831466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.959 [2024-11-29 03:04:10.831472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.959 [2024-11-29 03:04:10.831478] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.959 [2024-11-29 03:04:10.831484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.959 [2024-11-29 03:04:10.831492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.959 [2024-11-29 03:04:10.831496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.959 [2024-11-29 03:04:10.831503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.959 [2024-11-29 03:04:10.831508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.959 [2024-11-29 03:04:10.831515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.959 [2024-11-29 03:04:10.831522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:16:54.959 [2024-11-29 03:04:10.831531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.833255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.959 [2024-11-29 03:04:10.833285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.959 [2024-11-29 03:04:10.833293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:16:54.959 [2024-11-29 03:04:10.833300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.833407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.959 [2024-11-29 03:04:10.833416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.959 [2024-11-29 03:04:10.833425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:54.959 [2024-11-29 03:04:10.833432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.839299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.839328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.959 [2024-11-29 03:04:10.839335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.839343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 
[2024-11-29 03:04:10.839404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.839412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.959 [2024-11-29 03:04:10.839420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.839427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.839477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.839489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.959 [2024-11-29 03:04:10.839495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.839502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.839524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.839532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.959 [2024-11-29 03:04:10.839538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.839547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.850534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.850570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.959 [2024-11-29 03:04:10.850579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.850587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.859542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.859578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.959 [2024-11-29 03:04:10.859599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.859608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.859685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.859702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.959 [2024-11-29 03:04:10.859709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.859717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.859769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.859778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.959 [2024-11-29 03:04:10.859784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.859791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.859888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.859901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.959 [2024-11-29 03:04:10.859908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.859914] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.859967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.859976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:54.959 [2024-11-29 03:04:10.859982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.859990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.860033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.860043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.959 [2024-11-29 03:04:10.860049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.860056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.860106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.959 [2024-11-29 03:04:10.860115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.959 [2024-11-29 03:04:10.860122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.959 [2024-11-29 03:04:10.860129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.959 [2024-11-29 03:04:10.860304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.010 ms, result 0 00:16:54.959 true 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85993 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 85993 ']' 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 85993 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85993 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.959 killing process with pid 85993 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85993' 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 85993 00:16:54.959 03:04:10 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 85993 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:03.068 03:04:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:03.068 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:03.068 fio-3.35 00:17:03.068 Starting 1 thread 00:17:07.272 00:17:07.272 test: (groupid=0, jobs=1): err= 0: pid=86155: Fri Nov 29 03:04:23 2024 00:17:07.272 read: IOPS=928, BW=61.7MiB/s (64.7MB/s)(255MiB/4126msec) 00:17:07.272 slat (nsec): min=4107, max=38295, avg=7016.42, stdev=3396.82 00:17:07.272 clat (usec): min=240, max=1489, avg=477.77, stdev=171.44 00:17:07.272 lat (usec): min=244, max=1500, avg=484.79, stdev=173.19 00:17:07.272 clat percentiles (usec): 00:17:07.272 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 322], 00:17:07.272 | 30.00th=[ 338], 40.00th=[ 420], 50.00th=[ 453], 60.00th=[ 490], 00:17:07.272 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 685], 95.00th=[ 889], 00:17:07.272 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1254], 99.95th=[ 1450], 00:17:07.272 | 99.99th=[ 1483] 00:17:07.272 write: IOPS=935, BW=62.1MiB/s (65.1MB/s)(256MiB/4122msec); 0 zone resets 00:17:07.272 slat (usec): min=14, max=114, avg=21.04, stdev= 5.42 00:17:07.272 clat (usec): min=270, max=1769, avg=553.59, stdev=196.84 00:17:07.272 lat (usec): min=288, max=1796, avg=574.64, stdev=199.54 00:17:07.272 clat percentiles (usec): 00:17:07.272 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 351], 00:17:07.272 | 30.00th=[ 379], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 594], 00:17:07.272 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 865], 95.00th=[ 971], 00:17:07.272 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1467], 99.95th=[ 1532], 00:17:07.272 | 99.99th=[ 1778] 00:17:07.272 bw ( KiB/s): min=39440, max=93432, per=100.00%, avg=63750.00, stdev=17623.23, samples=8 00:17:07.272 iops : min= 580, max= 1374, avg=937.50, stdev=259.17, samples=8 00:17:07.272 lat (usec) : 250=0.03%, 500=49.40%, 750=40.17%, 
1000=8.44% 00:17:07.272 lat (msec) : 2=1.96% 00:17:07.272 cpu : usr=99.20%, sys=0.02%, ctx=5, majf=0, minf=1326 00:17:07.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.272 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.272 00:17:07.272 Run status group 0 (all jobs): 00:17:07.272 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=255MiB (267MB), run=4126-4126msec 00:17:07.272 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=256MiB (269MB), run=4122-4122msec 00:17:07.844 ----------------------------------------------------- 00:17:07.844 Suppressions used: 00:17:07.844 count bytes template 00:17:07.845 1 5 /usr/src/fio/parse.c 00:17:07.845 1 8 libtcmalloc_minimal.so 00:17:07.845 1 904 libcrypto.so 00:17:07.845 ----------------------------------------------------- 00:17:07.845 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:07.845 03:04:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:08.104 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:08.104 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:08.104 fio-3.35 00:17:08.104 Starting 2 threads 00:17:34.656 00:17:34.656 first_half: (groupid=0, jobs=1): err= 0: pid=86251: Fri Nov 29 03:04:46 2024 00:17:34.656 read: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(255MiB/21982msec) 00:17:34.656 slat (nsec): min=3196, max=21567, avg=5381.26, stdev=986.01 00:17:34.656 clat (usec): min=521, max=461663, avg=32677.47, stdev=20163.05 00:17:34.656 lat (usec): min=528, max=461673, avg=32682.85, stdev=20163.18 00:17:34.656 clat percentiles (msec): 00:17:34.656 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 28], 00:17:34.656 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:17:34.656 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 41], 00:17:34.656 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 296], 99.95th=[ 414], 00:17:34.656 | 99.99th=[ 451] 00:17:34.656 write: IOPS=3511, BW=13.7MiB/s (14.4MB/s)(256MiB/18663msec); 0 zone resets 00:17:34.656 slat (usec): min=4, max=1772, avg= 6.97, stdev= 9.27 00:17:34.656 clat (usec): min=383, max=78982, avg=10380.39, stdev=16088.10 00:17:34.656 lat (usec): min=392, max=78987, avg=10387.35, stdev=16088.22 00:17:34.656 clat percentiles (usec): 00:17:34.656 | 1.00th=[ 586], 5.00th=[ 709], 10.00th=[ 807], 20.00th=[ 1074], 00:17:34.656 | 30.00th=[ 2769], 40.00th=[ 3654], 50.00th=[ 4424], 60.00th=[ 5342], 00:17:34.656 | 70.00th=[ 6259], 80.00th=[12256], 90.00th=[29492], 95.00th=[56361], 00:17:34.656 | 99.00th=[64750], 99.50th=[67634], 99.90th=[73925], 99.95th=[77071], 00:17:34.656 | 99.99th=[78119] 00:17:34.656 bw ( KiB/s): min= 960, max=41552, per=84.83%, avg=23831.27, stdev=13151.02, samples=22 00:17:34.656 iops : min= 240, max=10388, avg=5957.82, stdev=3287.75, samples=22 00:17:34.656 lat (usec) : 500=0.02%, 750=3.55%, 1000=5.54% 00:17:34.656 lat (msec) : 2=3.51%, 4=10.21%, 10=16.60%, 20=6.87%, 50=47.55% 00:17:34.656 lat (msec) : 100=5.16%, 250=0.92%, 500=0.06% 00:17:34.656 cpu : usr=99.34%, sys=0.15%, ctx=32, majf=0, minf=5543 00:17:34.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.656 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:34.656 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:34.656 second_half: (groupid=0, jobs=1): err= 0: pid=86252: Fri Nov 29 03:04:46 2024 00:17:34.656 read: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(254MiB/21828msec) 00:17:34.656 slat (nsec): min=3015, max=58718, avg=4298.57, stdev=1296.93 00:17:34.656 clat (usec): min=554, max=465172, avg=33287.82, stdev=18160.97 00:17:34.656 lat (usec): min=558, max=465180, avg=33292.12, stdev=18161.11 00:17:34.656 clat percentiles (msec): 00:17:34.656 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 28], 00:17:34.656 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:34.656 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 
38], 95.00th=[ 43], 00:17:34.656 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 321], 00:17:34.656 | 99.99th=[ 464] 00:17:34.656 write: IOPS=4713, BW=18.4MiB/s (19.3MB/s)(256MiB/13904msec); 0 zone resets 00:17:34.656 slat (usec): min=3, max=3425, avg= 6.34, stdev=24.95 00:17:34.656 clat (usec): min=316, max=77886, avg=9522.44, stdev=15748.11 00:17:34.656 lat (usec): min=322, max=77891, avg=9528.78, stdev=15748.36 00:17:34.656 clat percentiles (usec): 00:17:34.656 | 1.00th=[ 627], 5.00th=[ 734], 10.00th=[ 816], 20.00th=[ 979], 00:17:34.656 | 30.00th=[ 1254], 40.00th=[ 2704], 50.00th=[ 3785], 60.00th=[ 4752], 00:17:34.656 | 70.00th=[ 5997], 80.00th=[11731], 90.00th=[19530], 95.00th=[55837], 00:17:34.656 | 99.00th=[64750], 99.50th=[67634], 99.90th=[69731], 99.95th=[74974], 00:17:34.656 | 99.99th=[77071] 00:17:34.656 bw ( KiB/s): min= 1992, max=43056, per=100.00%, avg=30844.18, stdev=12677.32, samples=17 00:17:34.656 iops : min= 498, max=10764, avg=7711.00, stdev=3169.33, samples=17 00:17:34.656 lat (usec) : 500=0.02%, 750=2.97%, 1000=7.77% 00:17:34.657 lat (msec) : 2=6.58%, 4=9.05%, 10=11.90%, 20=8.16%, 50=47.30% 00:17:34.657 lat (msec) : 100=5.25%, 250=0.96%, 500=0.04% 00:17:34.657 cpu : usr=99.20%, sys=0.19%, ctx=29, majf=0, minf=5591 00:17:34.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:34.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.657 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:34.657 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:34.657 00:17:34.657 Run status group 0 (all jobs): 00:17:34.657 READ: bw=23.2MiB/s (24.3MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=509MiB (534MB), run=21828-21982msec 00:17:34.657 WRITE: bw=27.4MiB/s (28.8MB/s), 13.7MiB/s-18.4MiB/s (14.4MB/s-19.3MB/s), io=512MiB (537MB), run=13904-18663msec 00:17:34.657 ----------------------------------------------------- 00:17:34.657 Suppressions used: 00:17:34.657 count bytes template 00:17:34.657 2 10 /usr/src/fio/parse.c 00:17:34.657 3 288 /usr/src/fio/iolog.c 00:17:34.657 1 8 libtcmalloc_minimal.so 00:17:34.657 1 904 libcrypto.so 00:17:34.657 ----------------------------------------------------- 00:17:34.657 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:34.657 03:04:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:34.657 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:34.657 fio-3.35 00:17:34.657 Starting 1 thread 00:17:49.555 00:17:49.555 test: (groupid=0, jobs=1): err= 0: pid=86539: Fri Nov 29 03:05:03 2024 00:17:49.555 read: IOPS=7916, BW=30.9MiB/s (32.4MB/s)(255MiB/8236msec) 00:17:49.555 slat (nsec): min=3139, max=23312, avg=4909.75, stdev=1124.88 00:17:49.555 clat (usec): min=539, max=31934, avg=16158.93, stdev=2253.44 00:17:49.555 lat (usec): min=547, max=31940, avg=16163.84, stdev=2253.49 00:17:49.555 clat percentiles (usec): 00:17:49.555 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14222], 20.00th=[14484], 00:17:49.555 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:17:49.555 | 70.00th=[16319], 80.00th=[16450], 90.00th=[17957], 95.00th=[21627], 00:17:49.555 | 99.00th=[25297], 99.50th=[25822], 99.90th=[27919], 99.95th=[29492], 00:17:49.555 | 99.99th=[31065] 00:17:49.555 write: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(256MiB/6114msec); 0 zone resets 00:17:49.555 slat (usec): min=4, max=1923, avg= 8.38, stdev= 9.39 00:17:49.555 clat (usec): min=517, max=63771, avg=11877.20, stdev=14569.00 00:17:49.555 lat (usec): min=524, max=63779, avg=11885.58, stdev=14569.16 00:17:49.555 clat percentiles (usec): 00:17:49.555 | 1.00th=[ 906], 5.00th=[ 1156], 10.00th=[ 1319], 20.00th=[ 1614], 00:17:49.555 | 30.00th=[ 1958], 40.00th=[ 2868], 50.00th=[ 6980], 60.00th=[ 8717], 00:17:49.555 | 70.00th=[11469], 80.00th=[15270], 90.00th=[39060], 95.00th=[46400], 00:17:49.555 | 99.00th=[55837], 99.50th=[57410], 99.90th=[61604], 99.95th=[62129], 00:17:49.555 | 99.99th=[63177] 00:17:49.555 bw ( KiB/s): min= 9472, max=66440, per=94.06%, avg=40329.85, stdev=13919.93, samples=13 00:17:49.555 iops : min= 2368, max=16610, avg=10082.46, stdev=3479.98, samples=13 00:17:49.555 lat (usec) : 750=0.16%, 1000=0.82% 00:17:49.555 lat (msec) : 2=14.49%, 4=5.37%, 10=12.08%, 20=55.44%, 50=10.03% 00:17:49.555 lat (msec) : 100=1.62% 00:17:49.555 cpu : 
usr=99.02%, sys=0.20%, ctx=110, majf=0, minf=5577 00:17:49.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:49.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.555 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:49.555 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:49.555 00:17:49.555 Run status group 0 (all jobs): 00:17:49.555 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8236-8236msec 00:17:49.555 WRITE: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=256MiB (268MB), run=6114-6114msec 00:17:49.555 ----------------------------------------------------- 00:17:49.555 Suppressions used: 00:17:49.555 count bytes template 00:17:49.555 1 5 /usr/src/fio/parse.c 00:17:49.555 2 192 /usr/src/fio/iolog.c 00:17:49.555 1 8 libtcmalloc_minimal.so 00:17:49.555 1 904 libcrypto.so 00:17:49.555 ----------------------------------------------------- 00:17:49.555 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.555 Remove shared memory files 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69031 /dev/shm/spdk_tgt_trace.pid84930 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:49.555 00:17:49.555 real 1m0.061s 00:17:49.555 user 2m14.209s 00:17:49.555 sys 0m2.649s 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.555 03:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:49.555 ************************************ 00:17:49.555 END TEST ftl_fio_basic 00:17:49.555 ************************************ 00:17:49.555 03:05:04 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:49.555 03:05:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:49.555 03:05:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.555 03:05:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:49.555 ************************************ 00:17:49.555 START TEST ftl_bdevperf 00:17:49.555 ************************************ 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:49.555 * Looking for test storage... 
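The cleanup traced above (fio.sh@84 plus remove_shm from ftl/common.sh) drops the generated FTL config and the shared-memory trace files left behind by each spdk_tgt instance. A minimal sketch of the same cleanup, with the pid-specific trace files generalized to a glob instead of the two explicit pids from this run:

    # sketch: remove the FTL config and stale SPDK shared-memory files
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    rm -f /dev/shm/spdk_tgt_trace.pid*    # pids 69031 and 84930 in the run above
    rm -f /dev/shm/iscsi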
00:17:49.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:49.555 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.556 --rc genhtml_branch_coverage=1 00:17:49.556 --rc genhtml_function_coverage=1 00:17:49.556 --rc genhtml_legend=1 00:17:49.556 --rc geninfo_all_blocks=1 00:17:49.556 --rc geninfo_unexecuted_blocks=1 00:17:49.556 00:17:49.556 ' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.556 --rc genhtml_branch_coverage=1 00:17:49.556 
--rc genhtml_function_coverage=1 00:17:49.556 --rc genhtml_legend=1 00:17:49.556 --rc geninfo_all_blocks=1 00:17:49.556 --rc geninfo_unexecuted_blocks=1 00:17:49.556 00:17:49.556 ' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.556 --rc genhtml_branch_coverage=1 00:17:49.556 --rc genhtml_function_coverage=1 00:17:49.556 --rc genhtml_legend=1 00:17:49.556 --rc geninfo_all_blocks=1 00:17:49.556 --rc geninfo_unexecuted_blocks=1 00:17:49.556 00:17:49.556 ' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.556 --rc genhtml_branch_coverage=1 00:17:49.556 --rc genhtml_function_coverage=1 00:17:49.556 --rc genhtml_legend=1 00:17:49.556 --rc geninfo_all_blocks=1 00:17:49.556 --rc geninfo_unexecuted_blocks=1 00:17:49.556 00:17:49.556 ' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86778 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86778 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86778 ']' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.556 03:05:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:49.556 [2024-11-29 03:05:04.714702] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
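bdevperf is launched with -z, so it brings the target up and then idles until tests are driven over RPC; waitforlisten blocks until the UNIX socket answers. A minimal sketch of that launch-and-wait pattern, polling with the stock spdk_get_version RPC (the 0.1 s interval is an arbitrary choice, not what waitforlisten uses internally):

    # start bdevperf in wait-for-RPC mode, then block until the socket is live
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done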
00:17:49.556 [2024-11-29 03:05:04.714865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86778 ] 00:17:49.556 [2024-11-29 03:05:04.861645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.556 [2024-11-29 03:05:04.902728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:49.830 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:50.199 03:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:50.199 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:50.200 { 00:17:50.200 "name": "nvme0n1", 00:17:50.200 "aliases": [ 00:17:50.200 "2ca02d95-2128-44f0-9e03-ca5f78812d87" 00:17:50.200 ], 00:17:50.200 "product_name": "NVMe disk", 00:17:50.200 "block_size": 4096, 00:17:50.200 "num_blocks": 1310720, 00:17:50.200 "uuid": "2ca02d95-2128-44f0-9e03-ca5f78812d87", 00:17:50.200 "numa_id": -1, 00:17:50.200 "assigned_rate_limits": { 00:17:50.200 "rw_ios_per_sec": 0, 00:17:50.200 "rw_mbytes_per_sec": 0, 00:17:50.200 "r_mbytes_per_sec": 0, 00:17:50.200 "w_mbytes_per_sec": 0 00:17:50.200 }, 00:17:50.200 "claimed": true, 00:17:50.200 "claim_type": "read_many_write_one", 00:17:50.200 "zoned": false, 00:17:50.200 "supported_io_types": { 00:17:50.200 "read": true, 00:17:50.200 "write": true, 00:17:50.200 "unmap": true, 00:17:50.200 "flush": true, 00:17:50.200 "reset": true, 00:17:50.200 "nvme_admin": true, 00:17:50.200 "nvme_io": true, 00:17:50.200 "nvme_io_md": false, 00:17:50.200 "write_zeroes": true, 00:17:50.200 "zcopy": false, 00:17:50.200 "get_zone_info": false, 00:17:50.200 "zone_management": false, 00:17:50.200 "zone_append": false, 00:17:50.200 "compare": true, 00:17:50.200 "compare_and_write": false, 00:17:50.200 "abort": true, 00:17:50.200 "seek_hole": false, 00:17:50.200 "seek_data": false, 00:17:50.200 "copy": true, 00:17:50.200 "nvme_iov_md": false 00:17:50.200 }, 00:17:50.200 "driver_specific": { 00:17:50.200 
"nvme": [ 00:17:50.200 { 00:17:50.200 "pci_address": "0000:00:11.0", 00:17:50.200 "trid": { 00:17:50.200 "trtype": "PCIe", 00:17:50.200 "traddr": "0000:00:11.0" 00:17:50.200 }, 00:17:50.200 "ctrlr_data": { 00:17:50.200 "cntlid": 0, 00:17:50.200 "vendor_id": "0x1b36", 00:17:50.200 "model_number": "QEMU NVMe Ctrl", 00:17:50.200 "serial_number": "12341", 00:17:50.200 "firmware_revision": "8.0.0", 00:17:50.200 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:50.200 "oacs": { 00:17:50.200 "security": 0, 00:17:50.200 "format": 1, 00:17:50.200 "firmware": 0, 00:17:50.200 "ns_manage": 1 00:17:50.200 }, 00:17:50.200 "multi_ctrlr": false, 00:17:50.200 "ana_reporting": false 00:17:50.200 }, 00:17:50.200 "vs": { 00:17:50.200 "nvme_version": "1.4" 00:17:50.200 }, 00:17:50.200 "ns_data": { 00:17:50.200 "id": 1, 00:17:50.200 "can_share": false 00:17:50.200 } 00:17:50.200 } 00:17:50.200 ], 00:17:50.200 "mp_policy": "active_passive" 00:17:50.200 } 00:17:50.200 } 00:17:50.200 ]' 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:50.200 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:50.480 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=03f91550-adaa-4153-b95d-c86b362bd15b 00:17:50.480 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:50.480 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03f91550-adaa-4153-b95d-c86b362bd15b 00:17:50.741 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:51.003 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=398b8569-26a7-4689-820d-9a6884aa90a3 00:17:51.003 03:05:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 398b8569-26a7-4689-820d-9a6884aa90a3 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:51.264 { 00:17:51.264 "name": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:51.264 "aliases": [ 00:17:51.264 "lvs/nvme0n1p0" 00:17:51.264 ], 00:17:51.264 "product_name": "Logical Volume", 00:17:51.264 "block_size": 4096, 00:17:51.264 "num_blocks": 26476544, 00:17:51.264 "uuid": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:51.264 "assigned_rate_limits": { 00:17:51.264 "rw_ios_per_sec": 0, 00:17:51.264 "rw_mbytes_per_sec": 0, 00:17:51.264 "r_mbytes_per_sec": 0, 00:17:51.264 "w_mbytes_per_sec": 0 00:17:51.264 }, 00:17:51.264 "claimed": false, 00:17:51.264 "zoned": false, 00:17:51.264 "supported_io_types": { 00:17:51.264 "read": true, 00:17:51.264 "write": true, 00:17:51.264 "unmap": true, 00:17:51.264 "flush": false, 00:17:51.264 "reset": true, 00:17:51.264 "nvme_admin": false, 00:17:51.264 "nvme_io": false, 00:17:51.264 "nvme_io_md": false, 00:17:51.264 "write_zeroes": true, 00:17:51.264 "zcopy": false, 00:17:51.264 "get_zone_info": false, 00:17:51.264 "zone_management": false, 00:17:51.264 "zone_append": false, 00:17:51.264 "compare": false, 00:17:51.264 "compare_and_write": false, 00:17:51.264 "abort": false, 00:17:51.264 "seek_hole": true, 00:17:51.264 "seek_data": true, 00:17:51.264 "copy": false, 00:17:51.264 "nvme_iov_md": false 00:17:51.264 }, 00:17:51.264 "driver_specific": { 00:17:51.264 "lvol": { 00:17:51.264 "lvol_store_uuid": "398b8569-26a7-4689-820d-9a6884aa90a3", 00:17:51.264 "base_bdev": "nvme0n1", 00:17:51.264 "thin_provision": true, 00:17:51.264 "num_allocated_clusters": 0, 00:17:51.264 "snapshot": false, 00:17:51.264 "clone": false, 00:17:51.264 "esnap_clone": false 00:17:51.264 } 00:17:51.264 } 00:17:51.264 } 00:17:51.264 ]' 00:17:51.264 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:51.524 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:51.784 { 00:17:51.784 "name": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:51.784 "aliases": [ 00:17:51.784 "lvs/nvme0n1p0" 00:17:51.784 ], 00:17:51.784 "product_name": "Logical Volume", 00:17:51.784 "block_size": 4096, 00:17:51.784 "num_blocks": 26476544, 00:17:51.784 "uuid": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:51.784 "assigned_rate_limits": { 00:17:51.784 "rw_ios_per_sec": 0, 00:17:51.784 "rw_mbytes_per_sec": 0, 00:17:51.784 "r_mbytes_per_sec": 0, 00:17:51.784 "w_mbytes_per_sec": 0 00:17:51.784 }, 00:17:51.784 "claimed": false, 00:17:51.784 "zoned": false, 00:17:51.784 "supported_io_types": { 00:17:51.784 "read": true, 00:17:51.784 "write": true, 00:17:51.784 "unmap": true, 00:17:51.784 "flush": false, 00:17:51.784 "reset": true, 00:17:51.784 "nvme_admin": false, 00:17:51.784 "nvme_io": false, 00:17:51.784 "nvme_io_md": false, 00:17:51.784 "write_zeroes": true, 00:17:51.784 "zcopy": false, 00:17:51.784 "get_zone_info": false, 00:17:51.784 "zone_management": false, 00:17:51.784 "zone_append": false, 00:17:51.784 "compare": false, 00:17:51.784 "compare_and_write": false, 00:17:51.784 "abort": false, 00:17:51.784 "seek_hole": true, 00:17:51.784 "seek_data": true, 00:17:51.784 "copy": false, 00:17:51.784 "nvme_iov_md": false 00:17:51.784 }, 00:17:51.784 "driver_specific": { 00:17:51.784 "lvol": { 00:17:51.784 "lvol_store_uuid": "398b8569-26a7-4689-820d-9a6884aa90a3", 00:17:51.784 "base_bdev": "nvme0n1", 00:17:51.784 "thin_provision": true, 00:17:51.784 "num_allocated_clusters": 0, 00:17:51.784 "snapshot": false, 00:17:51.784 "clone": false, 00:17:51.784 "esnap_clone": false 00:17:51.784 } 00:17:51.784 } 00:17:51.784 } 00:17:51.784 ]' 00:17:51.784 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:52.044 03:05:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:17:52.044 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:52.306 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfcd3201-e597-4071-9d0e-3598cf50f198 00:17:52.306 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:52.306 { 00:17:52.306 "name": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:52.306 "aliases": [ 00:17:52.306 "lvs/nvme0n1p0" 00:17:52.306 ], 00:17:52.306 "product_name": "Logical Volume", 00:17:52.306 "block_size": 4096, 00:17:52.306 "num_blocks": 26476544, 00:17:52.306 "uuid": "dfcd3201-e597-4071-9d0e-3598cf50f198", 00:17:52.306 "assigned_rate_limits": { 00:17:52.306 "rw_ios_per_sec": 0, 00:17:52.306 "rw_mbytes_per_sec": 0, 00:17:52.306 "r_mbytes_per_sec": 0, 00:17:52.306 "w_mbytes_per_sec": 0 00:17:52.306 }, 00:17:52.306 "claimed": false, 00:17:52.306 "zoned": false, 00:17:52.306 "supported_io_types": { 00:17:52.306 "read": true, 00:17:52.306 "write": true, 00:17:52.306 "unmap": true, 00:17:52.306 "flush": false, 00:17:52.306 "reset": true, 00:17:52.306 "nvme_admin": false, 00:17:52.306 "nvme_io": false, 00:17:52.306 "nvme_io_md": false, 00:17:52.306 "write_zeroes": true, 00:17:52.306 "zcopy": false, 00:17:52.306 "get_zone_info": false, 00:17:52.306 "zone_management": false, 00:17:52.306 "zone_append": false, 00:17:52.306 "compare": false, 00:17:52.306 "compare_and_write": false, 00:17:52.306 "abort": false, 00:17:52.306 "seek_hole": true, 00:17:52.306 "seek_data": true, 00:17:52.306 "copy": false, 00:17:52.306 "nvme_iov_md": false 00:17:52.306 }, 00:17:52.306 "driver_specific": { 00:17:52.306 "lvol": { 00:17:52.306 "lvol_store_uuid": "398b8569-26a7-4689-820d-9a6884aa90a3", 00:17:52.306 "base_bdev": "nvme0n1", 00:17:52.306 "thin_provision": true, 00:17:52.306 "num_allocated_clusters": 0, 00:17:52.306 "snapshot": false, 00:17:52.306 "clone": false, 00:17:52.306 "esnap_clone": false 00:17:52.306 } 00:17:52.306 } 00:17:52.306 } 00:17:52.306 ]' 00:17:52.306 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.306 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.306 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:52.568 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:52.568 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:52.568 03:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:52.568 03:05:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:52.568 03:05:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dfcd3201-e597-4071-9d0e-3598cf50f198 -c nvc0n1p0 --l2p_dram_limit 20 00:17:52.568 [2024-11-29 03:05:08.487858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.487905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:52.568 [2024-11-29 03:05:08.487920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:52.568 [2024-11-29 03:05:08.487927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.487962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.487970] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.568 [2024-11-29 03:05:08.487980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:52.568 [2024-11-29 03:05:08.487985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.488002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:52.568 [2024-11-29 03:05:08.488180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:52.568 [2024-11-29 03:05:08.488195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.488204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.568 [2024-11-29 03:05:08.488212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:52.568 [2024-11-29 03:05:08.488217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.488240] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 61419efd-3cc3-4c2e-b2a9-35500773c35f 00:17:52.568 [2024-11-29 03:05:08.489527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.489552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:52.568 [2024-11-29 03:05:08.489561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:52.568 [2024-11-29 03:05:08.489572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.496464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.496491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.568 [2024-11-29 03:05:08.496499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms 00:17:52.568 [2024-11-29 03:05:08.496509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.496597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.496609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.568 [2024-11-29 03:05:08.496620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:52.568 [2024-11-29 03:05:08.496628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.496661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.496671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:52.568 [2024-11-29 03:05:08.496677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:52.568 [2024-11-29 03:05:08.496688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.496706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:52.568 [2024-11-29 03:05:08.498398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.498421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.568 [2024-11-29 03:05:08.498432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:17:52.568 [2024-11-29 03:05:08.498438] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.498464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.498470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:52.568 [2024-11-29 03:05:08.498480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:52.568 [2024-11-29 03:05:08.498486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.498499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:52.568 [2024-11-29 03:05:08.498614] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:52.568 [2024-11-29 03:05:08.498626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:52.568 [2024-11-29 03:05:08.498634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:52.568 [2024-11-29 03:05:08.498644] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:52.568 [2024-11-29 03:05:08.498651] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:52.568 [2024-11-29 03:05:08.498661] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:52.568 [2024-11-29 03:05:08.498667] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:52.568 [2024-11-29 03:05:08.498675] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:52.568 [2024-11-29 03:05:08.498686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:52.568 [2024-11-29 03:05:08.498693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.498699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:52.568 [2024-11-29 03:05:08.498707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:52.568 [2024-11-29 03:05:08.498712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.498776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.568 [2024-11-29 03:05:08.498787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:52.568 [2024-11-29 03:05:08.498794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:52.568 [2024-11-29 03:05:08.498799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.568 [2024-11-29 03:05:08.498888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:52.568 [2024-11-29 03:05:08.498897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:52.568 [2024-11-29 03:05:08.498908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.568 [2024-11-29 03:05:08.498916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.568 [2024-11-29 03:05:08.498924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:52.568 [2024-11-29 03:05:08.498929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:52.568 [2024-11-29 03:05:08.498936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:52.568 
[2024-11-29 03:05:08.498942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:52.568 [2024-11-29 03:05:08.498949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:52.569 [2024-11-29 03:05:08.498955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.569 [2024-11-29 03:05:08.498962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:52.569 [2024-11-29 03:05:08.498967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:52.569 [2024-11-29 03:05:08.498975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.569 [2024-11-29 03:05:08.498981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:52.569 [2024-11-29 03:05:08.498988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:52.569 [2024-11-29 03:05:08.498994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:52.569 [2024-11-29 03:05:08.499008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:52.569 [2024-11-29 03:05:08.499027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:52.569 [2024-11-29 03:05:08.499046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:52.569 [2024-11-29 03:05:08.499068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:52.569 [2024-11-29 03:05:08.499090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:52.569 [2024-11-29 03:05:08.499112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.569 [2024-11-29 03:05:08.499126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:52.569 [2024-11-29 03:05:08.499132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:52.569 [2024-11-29 03:05:08.499140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.569 [2024-11-29 03:05:08.499147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:52.569 [2024-11-29 03:05:08.499155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:52.569 [2024-11-29 03:05:08.499161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:52.569 [2024-11-29 03:05:08.499174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:52.569 [2024-11-29 03:05:08.499181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:52.569 [2024-11-29 03:05:08.499199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:52.569 [2024-11-29 03:05:08.499206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.569 [2024-11-29 03:05:08.499222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:52.569 [2024-11-29 03:05:08.499232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:52.569 [2024-11-29 03:05:08.499238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:52.569 [2024-11-29 03:05:08.499246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:52.569 [2024-11-29 03:05:08.499252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:52.569 [2024-11-29 03:05:08.499259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:52.569 [2024-11-29 03:05:08.499269] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:52.569 [2024-11-29 03:05:08.499282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:52.569 [2024-11-29 03:05:08.499298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:52.569 [2024-11-29 03:05:08.499305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:52.569 [2024-11-29 03:05:08.499313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:52.569 [2024-11-29 03:05:08.499320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:52.569 [2024-11-29 03:05:08.499331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:52.569 [2024-11-29 03:05:08.499337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:52.569 [2024-11-29 03:05:08.499350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:52.569 [2024-11-29 03:05:08.499357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:52.569 [2024-11-29 03:05:08.499366] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:52.569 [2024-11-29 03:05:08.499405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:52.569 [2024-11-29 03:05:08.499416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:52.569 [2024-11-29 03:05:08.499431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:52.569 [2024-11-29 03:05:08.499436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:52.569 [2024-11-29 03:05:08.499445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:52.569 [2024-11-29 03:05:08.499451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.569 [2024-11-29 03:05:08.499461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:52.569 [2024-11-29 03:05:08.499467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:17:52.569 [2024-11-29 03:05:08.499475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.569 [2024-11-29 03:05:08.499504] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
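The layout dump is internally consistent and worth sanity-checking by hand: 20971520 L2P entries at the stated 4-byte address size is exactly the 80.00 MiB l2p region, and at one 4 KiB block per entry the table addresses 81920 MiB (80 GiB) of user space, less than the 102400 MiB data_btm region, the difference acting as overprovisioning. The --l2p_dram_limit 20 passed to bdev_ftl_create caps the resident portion of that table, which is why the startup log later reports an l2p maximum resident size of 19 (of 20) MiB. The arithmetic as a quick shell check:

    # sanity-check the FTL layout numbers printed above
    entries=20971520                              # "L2P entries"
    echo $(( entries * 4 / 1024 / 1024 ))         # L2P table MiB -> 80, matches "Region l2p ... 80.00 MiB"
    echo $(( entries * 4096 / 1024 / 1024 ))      # mapped user space MiB -> 81920 (80 GiB)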
00:17:52.569 [2024-11-29 03:05:08.499520] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.768 [2024-11-29 03:05:12.254304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.768 [2024-11-29 03:05:12.254389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.768 [2024-11-29 03:05:12.254409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3754.781 ms 00:17:56.768 [2024-11-29 03:05:12.254421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.768 [2024-11-29 03:05:12.268911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.268970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.769 [2024-11-29 03:05:12.268986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.369 ms 00:17:56.769 [2024-11-29 03:05:12.269001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.269115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.269129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.769 [2024-11-29 03:05:12.269144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:56.769 [2024-11-29 03:05:12.269155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.289192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.289257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.769 [2024-11-29 03:05:12.289273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.988 ms 00:17:56.769 [2024-11-29 03:05:12.289285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.289329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.289345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.769 [2024-11-29 03:05:12.289356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.769 [2024-11-29 03:05:12.289367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.290009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.290060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.769 [2024-11-29 03:05:12.290075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:17:56.769 [2024-11-29 03:05:12.290090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.290233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.290248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.769 [2024-11-29 03:05:12.290262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:56.769 [2024-11-29 03:05:12.290273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.299180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.299237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.769 [2024-11-29 
03:05:12.299249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.885 ms 00:17:56.769 [2024-11-29 03:05:12.299259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.309937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:56.769 [2024-11-29 03:05:12.318286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.318339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.769 [2024-11-29 03:05:12.318353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.940 ms 00:17:56.769 [2024-11-29 03:05:12.318362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.402679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.402757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.769 [2024-11-29 03:05:12.402778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.253 ms 00:17:56.769 [2024-11-29 03:05:12.402792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.403019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.403032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.769 [2024-11-29 03:05:12.403044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:17:56.769 [2024-11-29 03:05:12.403053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.409192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.409245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.769 [2024-11-29 03:05:12.409260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:17:56.769 [2024-11-29 03:05:12.409275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.414442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.414492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.769 [2024-11-29 03:05:12.414507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.108 ms 00:17:56.769 [2024-11-29 03:05:12.414515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.414877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.414905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.769 [2024-11-29 03:05:12.414922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:56.769 [2024-11-29 03:05:12.414943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.456139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.456198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:56.769 [2024-11-29 03:05:12.456214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.167 ms 00:17:56.769 [2024-11-29 03:05:12.456222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.463724] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.463781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:56.769 [2024-11-29 03:05:12.463795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.411 ms 00:17:56.769 [2024-11-29 03:05:12.463804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.469800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.469863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:56.769 [2024-11-29 03:05:12.469877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.927 ms 00:17:56.769 [2024-11-29 03:05:12.469884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.476509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.476565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.769 [2024-11-29 03:05:12.476582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.572 ms 00:17:56.769 [2024-11-29 03:05:12.476590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.476645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.476664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.769 [2024-11-29 03:05:12.476676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:56.769 [2024-11-29 03:05:12.476685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.476759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.769 [2024-11-29 03:05:12.476770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.769 [2024-11-29 03:05:12.476780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:56.769 [2024-11-29 03:05:12.476788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-29 03:05:12.478073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3989.663 ms, result 0 00:17:56.769 { 00:17:56.769 "name": "ftl0", 00:17:56.769 "uuid": "61419efd-3cc3-4c2e-b2a9-35500773c35f" 00:17:56.769 } 00:17:56.769 03:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:56.769 03:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:56.769 03:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:56.769 03:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:57.031 [2024-11-29 03:05:12.815750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:57.031 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:57.031 Zero copy mechanism will not be used. 00:17:57.031 Running I/O for 4 seconds... 
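Of the 3989.663 ms "FTL startup" duration reported above, 3754.781 ms was the NV cache scrub, so the remaining bring-up steps together took about a quarter of a second. Before driving I/O, bdevperf.sh@28 confirms the new bdev actually registered under the expected name; condensed to one line, that check is:

    # confirm the FTL bdev came up (pattern from bdevperf.sh@28 above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0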
00:17:58.922 995.00 IOPS, 66.07 MiB/s [2024-11-29T03:05:15.858Z] 848.00 IOPS, 56.31 MiB/s [2024-11-29T03:05:17.244Z] 929.00 IOPS, 61.69 MiB/s [2024-11-29T03:05:17.244Z] 943.00 IOPS, 62.62 MiB/s
00:18:01.252 Latency(us)
00:18:01.252 [2024-11-29T03:05:17.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:01.252 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:01.252 ftl0 : 4.00 942.63 62.60 0.00 0.00 1119.12 226.86 3377.62
00:18:01.252 [2024-11-29T03:05:17.244Z] ===================================================================================================================
00:18:01.252 [2024-11-29T03:05:17.244Z] Total : 942.63 62.60 0.00 0.00 1119.12 226.86 3377.62
00:18:01.252 [2024-11-29 03:05:16.825632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:01.252 {
00:18:01.252   "results": [
00:18:01.252     {
00:18:01.252       "job": "ftl0",
00:18:01.252       "core_mask": "0x1",
00:18:01.252       "workload": "randwrite",
00:18:01.252       "status": "finished",
00:18:01.252       "queue_depth": 1,
00:18:01.252       "io_size": 69632,
00:18:01.252       "runtime": 4.002647,
00:18:01.252       "iops": 942.6262171008335,
00:18:01.252       "mibps": 62.59627222935222,
00:18:01.252       "io_failed": 0,
00:18:01.252       "io_timeout": 0,
00:18:01.252       "avg_latency_us": 1119.1222491793922,
00:18:01.252       "min_latency_us": 226.85538461538462,
00:18:01.252       "max_latency_us": 3377.6246153846155
00:18:01.252     }
00:18:01.252   ],
00:18:01.252   "core_count": 1
00:18:01.252 }
00:18:01.252 03:05:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:01.252 [2024-11-29 03:05:16.938468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:01.252 Running I/O for 4 seconds...
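The JSON object above is the perform_tests result that bdevperf.py prints. As a hypothetical post-processing step (result.json is an assumed filename, not something the test writes), jq, which bdevperf.sh already uses for the bdev name, could pull out the headline numbers:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json
    # ftl0: 942.6262171008335 IOPS, 62.59627222935222 MiB/s, avg 1119.1222491793922 us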
00:18:03.140 6599.00 IOPS, 25.78 MiB/s [2024-11-29T03:05:20.077Z] 5885.00 IOPS, 22.99 MiB/s [2024-11-29T03:05:21.018Z] 5546.67 IOPS, 21.67 MiB/s [2024-11-29T03:05:21.018Z] 5463.50 IOPS, 21.34 MiB/s
00:18:05.026 Latency(us)
00:18:05.026 [2024-11-29T03:05:21.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.026 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:05.026 ftl0 : 4.01 5480.50 21.41 0.00 0.00 23340.23 346.58 107277.39
00:18:05.026 [2024-11-29T03:05:21.018Z] ===================================================================================================================
00:18:05.026 [2024-11-29T03:05:21.018Z] Total : 5480.50 21.41 0.00 0.00 23340.23 0.00 107277.39
00:18:05.026 [2024-11-29 03:05:20.953653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:05.026 {
00:18:05.026   "results": [
00:18:05.026     {
00:18:05.026       "job": "ftl0",
00:18:05.026       "core_mask": "0x1",
00:18:05.026       "workload": "randwrite",
00:18:05.026       "status": "finished",
00:18:05.026       "queue_depth": 128,
00:18:05.026       "io_size": 4096,
00:18:05.026       "runtime": 4.008025,
00:18:05.026       "iops": 5480.50473737065,
00:18:05.026       "mibps": 21.408221630354102,
00:18:05.026       "io_failed": 0,
00:18:05.026       "io_timeout": 0,
00:18:05.026       "avg_latency_us": 23340.226918524433,
00:18:05.026       "min_latency_us": 346.5846153846154,
00:18:05.026       "max_latency_us": 107277.39076923077
00:18:05.026     }
00:18:05.026   ],
00:18:05.026   "core_count": 1
00:18:05.026 }
00:18:05.026 03:05:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:05.286 [2024-11-29 03:05:21.056790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:05.286 Running I/O for 4 seconds...
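A consistency check on the table above: the mibps column is iops * io_size / 2^20. For the q=128 randwrite row (illustrative arithmetic only):

    awk 'BEGIN { printf "%.2f\n", 5480.50473737065 * 4096 / 1048576 }'   # 21.41 MiB/s, matching the reported mibps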
00:18:07.174 4975.00 IOPS, 19.43 MiB/s [2024-11-29T03:05:24.110Z] 5053.00 IOPS, 19.74 MiB/s [2024-11-29T03:05:25.495Z] 4922.67 IOPS, 19.23 MiB/s [2024-11-29T03:05:25.495Z] 4881.25 IOPS, 19.07 MiB/s
00:18:09.503 Latency(us)
00:18:09.503 [2024-11-29T03:05:25.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.503 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:09.503 Verification LBA range: start 0x0 length 0x1400000
00:18:09.503 ftl0 : 4.01 4893.06 19.11 0.00 0.00 26080.21 318.23 40733.14
00:18:09.503 [2024-11-29T03:05:25.495Z] ===================================================================================================================
00:18:09.503 [2024-11-29T03:05:25.495Z] Total : 4893.06 19.11 0.00 0.00 26080.21 0.00 40733.14
00:18:09.503 [2024-11-29 03:05:25.078658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:09.503 {
00:18:09.503   "results": [
00:18:09.503     {
00:18:09.503       "job": "ftl0",
00:18:09.503       "core_mask": "0x1",
00:18:09.503       "workload": "verify",
00:18:09.503       "status": "finished",
00:18:09.503       "verify_range": {
00:18:09.503         "start": 0,
00:18:09.503         "length": 20971520
00:18:09.503       },
00:18:09.503       "queue_depth": 128,
00:18:09.503       "io_size": 4096,
00:18:09.503       "runtime": 4.014667,
00:18:09.503       "iops": 4893.058378191766,
00:18:09.503       "mibps": 19.113509289811585,
00:18:09.503       "io_failed": 0,
00:18:09.503       "io_timeout": 0,
00:18:09.503       "avg_latency_us": 26080.21318296446,
00:18:09.503       "min_latency_us": 318.2276923076923,
00:18:09.503       "max_latency_us": 40733.14461538461
00:18:09.503     }
00:18:09.503   ],
00:18:09.503   "core_count": 1
00:18:09.503 }
00:18:09.503 03:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:09.503 [2024-11-29 03:05:25.295035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.503 [2024-11-29 03:05:25.295083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:09.503 [2024-11-29 03:05:25.295097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:09.503 [2024-11-29 03:05:25.295106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.503 [2024-11-29 03:05:25.295133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:09.503 [2024-11-29 03:05:25.295636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.503 [2024-11-29 03:05:25.295666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:09.503 [2024-11-29 03:05:25.295675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms
00:18:09.503 [2024-11-29 03:05:25.295685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.503 [2024-11-29 03:05:25.298044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.503 [2024-11-29 03:05:25.298085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:09.503 [2024-11-29 03:05:25.298095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms
00:18:09.503 [2024-11-29 03:05:25.298107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.496509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.496562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P
00:18:09.765 [2024-11-29 03:05:25.496578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 198.384 ms
00:18:09.765 [2024-11-29 03:05:25.496588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.502791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.502838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:09.765 [2024-11-29 03:05:25.502850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms
00:18:09.765 [2024-11-29 03:05:25.502861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.505388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.505431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:09.765 [2024-11-29 03:05:25.505441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms
00:18:09.765 [2024-11-29 03:05:25.505462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.510892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.510945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:09.765 [2024-11-29 03:05:25.510955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.395 ms
00:18:09.765 [2024-11-29 03:05:25.510968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.511091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.511105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:09.765 [2024-11-29 03:05:25.511113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms
00:18:09.765 [2024-11-29 03:05:25.511123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.513700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.513747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:09.765 [2024-11-29 03:05:25.513758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms
00:18:09.765 [2024-11-29 03:05:25.513768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.516601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.516647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:09.765 [2024-11-29 03:05:25.516657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms
00:18:09.765 [2024-11-29 03:05:25.516666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.518593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.518634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:09.765 [2024-11-29 03:05:25.518643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms
00:18:09.765 [2024-11-29 03:05:25.518655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.765 [2024-11-29 03:05:25.520527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.765 [2024-11-29 03:05:25.520569]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.765 [2024-11-29 03:05:25.520578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.818 ms 00:18:09.765 [2024-11-29 03:05:25.520590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.765 [2024-11-29 03:05:25.520621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.765 [2024-11-29 03:05:25.520641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:09.765 [2024-11-29 03:05:25.520851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.520993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:09.765 [2024-11-29 03:05:25.521143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521524] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.766 [2024-11-29 03:05:25.521565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.766 [2024-11-29 03:05:25.521573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61419efd-3cc3-4c2e-b2a9-35500773c35f 00:18:09.766 [2024-11-29 03:05:25.521595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.766 [2024-11-29 03:05:25.521603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.766 [2024-11-29 03:05:25.521611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.766 [2024-11-29 03:05:25.521619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.766 [2024-11-29 03:05:25.521634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.766 [2024-11-29 03:05:25.521642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.766 [2024-11-29 03:05:25.521650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.766 [2024-11-29 03:05:25.521657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.766 [2024-11-29 03:05:25.521665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.766 [2024-11-29 03:05:25.521673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.766 [2024-11-29 03:05:25.521685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.766 [2024-11-29 03:05:25.521695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:09.766 [2024-11-29 03:05:25.521704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.523496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.766 [2024-11-29 03:05:25.523536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.766 [2024-11-29 03:05:25.523546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:18:09.766 [2024-11-29 03:05:25.523555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.523645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.766 [2024-11-29 03:05:25.523657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.766 [2024-11-29 03:05:25.523666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:09.766 [2024-11-29 03:05:25.523677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.529789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.529849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.766 [2024-11-29 03:05:25.529859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.529868] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.529924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.529937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.766 [2024-11-29 03:05:25.529944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.529953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.530019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.530031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.766 [2024-11-29 03:05:25.530039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.530048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.530066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.530075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.766 [2024-11-29 03:05:25.530084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.530095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.540737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.540786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.766 [2024-11-29 03:05:25.540796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.540806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.549862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.549908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.766 [2024-11-29 03:05:25.549921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.549931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.549994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.550006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.766 [2024-11-29 03:05:25.550019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.550029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.550079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.550091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.766 [2024-11-29 03:05:25.550099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.766 [2024-11-29 03:05:25.550113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.766 [2024-11-29 03:05:25.550179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.766 [2024-11-29 03:05:25.550191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.766 [2024-11-29 03:05:25.550202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:18:09.766 [2024-11-29 03:05:25.550212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.767 [2024-11-29 03:05:25.550242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.767 [2024-11-29 03:05:25.550253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:09.767 [2024-11-29 03:05:25.550261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.767 [2024-11-29 03:05:25.550270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.767 [2024-11-29 03:05:25.550316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.767 [2024-11-29 03:05:25.550327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:09.767 [2024-11-29 03:05:25.550338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.767 [2024-11-29 03:05:25.550348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.767 [2024-11-29 03:05:25.550390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.767 [2024-11-29 03:05:25.550408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:09.767 [2024-11-29 03:05:25.550417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.767 [2024-11-29 03:05:25.550430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.767 [2024-11-29 03:05:25.550556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 255.487 ms, result 0
00:18:09.767 true
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86778
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86778 ']'
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86778
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86778 killing process with pid 86778 Received shutdown signal, test time was about 4.000000 seconds
00:18:09.767
00:18:09.767 Latency(us)
00:18:09.767 [2024-11-29T03:05:25.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.767 [2024-11-29T03:05:25.759Z] ===================================================================================================================
00:18:09.767 [2024-11-29T03:05:25.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86778'
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86778
00:18:09.767 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86778
00:18:10.028 Remove shared memory files
00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:10.028 03:05:25
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:10.028 ************************************ 00:18:10.028 END TEST ftl_bdevperf 00:18:10.028 ************************************ 00:18:10.028 00:18:10.028 real 0m21.377s 00:18:10.028 user 0m24.005s 00:18:10.028 sys 0m0.950s 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.028 03:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:10.028 03:05:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:10.028 03:05:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:10.028 03:05:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.028 03:05:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:10.028 ************************************ 00:18:10.028 START TEST ftl_trim 00:18:10.028 ************************************ 00:18:10.028 03:05:25 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:10.028 * Looking for test storage... 00:18:10.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.028 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:10.028 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:10.028 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.290 03:05:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:10.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.290 --rc genhtml_branch_coverage=1 00:18:10.290 --rc genhtml_function_coverage=1 00:18:10.290 --rc genhtml_legend=1 00:18:10.290 --rc geninfo_all_blocks=1 00:18:10.290 --rc geninfo_unexecuted_blocks=1 00:18:10.290 00:18:10.290 ' 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:10.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.290 --rc genhtml_branch_coverage=1 00:18:10.290 --rc genhtml_function_coverage=1 00:18:10.290 --rc genhtml_legend=1 00:18:10.290 --rc geninfo_all_blocks=1 00:18:10.290 --rc geninfo_unexecuted_blocks=1 00:18:10.290 00:18:10.290 ' 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:10.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.290 --rc genhtml_branch_coverage=1 00:18:10.290 --rc genhtml_function_coverage=1 00:18:10.290 --rc genhtml_legend=1 00:18:10.290 --rc geninfo_all_blocks=1 00:18:10.290 --rc geninfo_unexecuted_blocks=1 00:18:10.290 00:18:10.290 ' 00:18:10.290 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:10.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.290 --rc genhtml_branch_coverage=1 00:18:10.290 --rc genhtml_function_coverage=1 00:18:10.290 --rc genhtml_legend=1 00:18:10.290 --rc geninfo_all_blocks=1 00:18:10.290 --rc geninfo_unexecuted_blocks=1 00:18:10.290 00:18:10.290 ' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
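The scripts/common.sh xtrace above (lt 1.15 2 expanding to cmp_versions 1.15 '<' 2) compares version strings field by field after splitting them into arrays. A minimal standalone sketch of the same idea, simplified to dot-separated numeric fields only (an assumed simplification, not the real cmp_versions, which also splits on '-' and ':'):

    ver_lt() {   # succeeds if $1 sorts before $2, compared numeric field by field
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo '1.15 < 2'   # prints 1.15 < 2, matching the trace above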
00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:10.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
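The "Waiting for process to start up..." message above comes from waitforlisten, whose xtrace follows just below: trim.sh launches spdk_tgt in the background and polls its RPC socket before issuing any commands. A rough sketch of that launch-and-wait pattern under the same paths (simplified; the real waitforlisten lives in autotest_common.sh and is more careful):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # keep polling until the UNIX domain socket answers
    done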
00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:10.290 03:05:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.291 03:05:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.291 03:05:26 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:10.291 03:05:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87119 00:18:10.291 03:05:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87119 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87119 ']' 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.291 03:05:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:10.291 03:05:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:10.291 [2024-11-29 03:05:26.182695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
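Two quick numbers from the trim.sh setup traced above: at the 4096 B block size reported for nvme0n1 further down, data_size_in_blocks=65536 is 256 MiB of I/O data and unmap_size_in_blocks=1024 is 4 MiB per unmap; and the -m 0x7 mask is binary 111, i.e. CPU cores 0-2, which matches the three reactors that start next. Illustrative arithmetic only:

    echo $(( 65536 * 4096 / 1048576 ))   # data region: 256 MiB
    echo $(( 1024 * 4096 / 1048576 ))    # one unmap: 4 MiB
    mask=0x7; for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done   # core 0, core 1, core 2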
00:18:10.291 [2024-11-29 03:05:26.182821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87119 ] 00:18:10.551 [2024-11-29 03:05:26.327962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.551 [2024-11-29 03:05:26.349180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.551 [2024-11-29 03:05:26.349414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.551 [2024-11-29 03:05:26.349456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.120 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.120 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:11.120 03:05:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:11.381 03:05:27 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:11.381 03:05:27 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:11.381 03:05:27 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:11.381 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:11.381 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:11.381 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:11.381 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:11.381 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:11.641 { 00:18:11.641 "name": "nvme0n1", 00:18:11.641 "aliases": [ 00:18:11.641 "19959090-8689-4e92-b870-4b9cbf760a3c" 00:18:11.641 ], 00:18:11.641 "product_name": "NVMe disk", 00:18:11.641 "block_size": 4096, 00:18:11.641 "num_blocks": 1310720, 00:18:11.641 "uuid": "19959090-8689-4e92-b870-4b9cbf760a3c", 00:18:11.641 "numa_id": -1, 00:18:11.641 "assigned_rate_limits": { 00:18:11.641 "rw_ios_per_sec": 0, 00:18:11.641 "rw_mbytes_per_sec": 0, 00:18:11.641 "r_mbytes_per_sec": 0, 00:18:11.641 "w_mbytes_per_sec": 0 00:18:11.641 }, 00:18:11.641 "claimed": true, 00:18:11.641 "claim_type": "read_many_write_one", 00:18:11.641 "zoned": false, 00:18:11.641 "supported_io_types": { 00:18:11.641 "read": true, 00:18:11.641 "write": true, 00:18:11.641 "unmap": true, 00:18:11.641 "flush": true, 00:18:11.641 "reset": true, 00:18:11.641 "nvme_admin": true, 00:18:11.641 "nvme_io": true, 00:18:11.641 "nvme_io_md": false, 00:18:11.641 "write_zeroes": true, 00:18:11.641 "zcopy": false, 00:18:11.641 "get_zone_info": false, 00:18:11.641 "zone_management": false, 00:18:11.641 "zone_append": false, 00:18:11.641 "compare": true, 00:18:11.641 "compare_and_write": false, 00:18:11.641 "abort": true, 00:18:11.641 "seek_hole": false, 00:18:11.641 
"seek_data": false, 00:18:11.641 "copy": true, 00:18:11.641 "nvme_iov_md": false 00:18:11.641 }, 00:18:11.641 "driver_specific": { 00:18:11.641 "nvme": [ 00:18:11.641 { 00:18:11.641 "pci_address": "0000:00:11.0", 00:18:11.641 "trid": { 00:18:11.641 "trtype": "PCIe", 00:18:11.641 "traddr": "0000:00:11.0" 00:18:11.641 }, 00:18:11.641 "ctrlr_data": { 00:18:11.641 "cntlid": 0, 00:18:11.641 "vendor_id": "0x1b36", 00:18:11.641 "model_number": "QEMU NVMe Ctrl", 00:18:11.641 "serial_number": "12341", 00:18:11.641 "firmware_revision": "8.0.0", 00:18:11.641 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:11.641 "oacs": { 00:18:11.641 "security": 0, 00:18:11.641 "format": 1, 00:18:11.641 "firmware": 0, 00:18:11.641 "ns_manage": 1 00:18:11.641 }, 00:18:11.641 "multi_ctrlr": false, 00:18:11.641 "ana_reporting": false 00:18:11.641 }, 00:18:11.641 "vs": { 00:18:11.641 "nvme_version": "1.4" 00:18:11.641 }, 00:18:11.641 "ns_data": { 00:18:11.641 "id": 1, 00:18:11.641 "can_share": false 00:18:11.641 } 00:18:11.641 } 00:18:11.641 ], 00:18:11.641 "mp_policy": "active_passive" 00:18:11.641 } 00:18:11.641 } 00:18:11.641 ]' 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:11.641 03:05:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:11.641 03:05:27 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:11.641 03:05:27 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:11.641 03:05:27 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:11.641 03:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:11.641 03:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.900 03:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=398b8569-26a7-4689-820d-9a6884aa90a3 00:18:11.900 03:05:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:11.900 03:05:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 398b8569-26a7-4689-820d-9a6884aa90a3 00:18:12.161 03:05:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:12.420 03:05:28 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=2119d3de-3c85-4291-867e-b9fae5b7ef89 00:18:12.420 03:05:28 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2119d3de-3c85-4291-867e-b9fae5b7ef89 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:12.681 03:05:28 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 
72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.681 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.681 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:12.681 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:12.681 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:12.681 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:12.943 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:12.943 { 00:18:12.943 "name": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:12.943 "aliases": [ 00:18:12.943 "lvs/nvme0n1p0" 00:18:12.943 ], 00:18:12.943 "product_name": "Logical Volume", 00:18:12.943 "block_size": 4096, 00:18:12.943 "num_blocks": 26476544, 00:18:12.944 "uuid": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:12.944 "assigned_rate_limits": { 00:18:12.944 "rw_ios_per_sec": 0, 00:18:12.944 "rw_mbytes_per_sec": 0, 00:18:12.944 "r_mbytes_per_sec": 0, 00:18:12.944 "w_mbytes_per_sec": 0 00:18:12.944 }, 00:18:12.944 "claimed": false, 00:18:12.944 "zoned": false, 00:18:12.944 "supported_io_types": { 00:18:12.944 "read": true, 00:18:12.944 "write": true, 00:18:12.944 "unmap": true, 00:18:12.944 "flush": false, 00:18:12.944 "reset": true, 00:18:12.944 "nvme_admin": false, 00:18:12.944 "nvme_io": false, 00:18:12.944 "nvme_io_md": false, 00:18:12.944 "write_zeroes": true, 00:18:12.944 "zcopy": false, 00:18:12.944 "get_zone_info": false, 00:18:12.944 "zone_management": false, 00:18:12.944 "zone_append": false, 00:18:12.944 "compare": false, 00:18:12.944 "compare_and_write": false, 00:18:12.944 "abort": false, 00:18:12.944 "seek_hole": true, 00:18:12.944 "seek_data": true, 00:18:12.944 "copy": false, 00:18:12.944 "nvme_iov_md": false 00:18:12.944 }, 00:18:12.944 "driver_specific": { 00:18:12.944 "lvol": { 00:18:12.944 "lvol_store_uuid": "2119d3de-3c85-4291-867e-b9fae5b7ef89", 00:18:12.944 "base_bdev": "nvme0n1", 00:18:12.944 "thin_provision": true, 00:18:12.944 "num_allocated_clusters": 0, 00:18:12.944 "snapshot": false, 00:18:12.944 "clone": false, 00:18:12.944 "esnap_clone": false 00:18:12.944 } 00:18:12.944 } 00:18:12.944 } 00:18:12.944 ]' 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:12.944 03:05:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:12.944 03:05:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:12.944 03:05:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:12.944 03:05:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:13.205 03:05:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:13.205 03:05:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:13.205 03:05:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.205 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local 
bdev_name=72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.205 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.205 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:13.205 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:13.205 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.466 { 00:18:13.466 "name": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:13.466 "aliases": [ 00:18:13.466 "lvs/nvme0n1p0" 00:18:13.466 ], 00:18:13.466 "product_name": "Logical Volume", 00:18:13.466 "block_size": 4096, 00:18:13.466 "num_blocks": 26476544, 00:18:13.466 "uuid": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:13.466 "assigned_rate_limits": { 00:18:13.466 "rw_ios_per_sec": 0, 00:18:13.466 "rw_mbytes_per_sec": 0, 00:18:13.466 "r_mbytes_per_sec": 0, 00:18:13.466 "w_mbytes_per_sec": 0 00:18:13.466 }, 00:18:13.466 "claimed": false, 00:18:13.466 "zoned": false, 00:18:13.466 "supported_io_types": { 00:18:13.466 "read": true, 00:18:13.466 "write": true, 00:18:13.466 "unmap": true, 00:18:13.466 "flush": false, 00:18:13.466 "reset": true, 00:18:13.466 "nvme_admin": false, 00:18:13.466 "nvme_io": false, 00:18:13.466 "nvme_io_md": false, 00:18:13.466 "write_zeroes": true, 00:18:13.466 "zcopy": false, 00:18:13.466 "get_zone_info": false, 00:18:13.466 "zone_management": false, 00:18:13.466 "zone_append": false, 00:18:13.466 "compare": false, 00:18:13.466 "compare_and_write": false, 00:18:13.466 "abort": false, 00:18:13.466 "seek_hole": true, 00:18:13.466 "seek_data": true, 00:18:13.466 "copy": false, 00:18:13.466 "nvme_iov_md": false 00:18:13.466 }, 00:18:13.466 "driver_specific": { 00:18:13.466 "lvol": { 00:18:13.466 "lvol_store_uuid": "2119d3de-3c85-4291-867e-b9fae5b7ef89", 00:18:13.466 "base_bdev": "nvme0n1", 00:18:13.466 "thin_provision": true, 00:18:13.466 "num_allocated_clusters": 0, 00:18:13.466 "snapshot": false, 00:18:13.466 "clone": false, 00:18:13.466 "esnap_clone": false 00:18:13.466 } 00:18:13.466 } 00:18:13.466 } 00:18:13.466 ]' 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:13.466 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:13.467 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:13.467 03:05:29 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:13.467 03:05:29 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:13.726 03:05:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:13.726 03:05:29 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:13.726 03:05:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.726 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.726 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.726 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 
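Aside: the get_bdev_size helper being traced for the third time here reduces to two jq pulls over the bdev_get_bdevs JSON plus one multiplication. A minimal stand-alone sketch of the same arithmetic, with $bdev_name as a placeholder:

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096 for the lvol above
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 for the lvol above
    echo $(( bs * nb / 1024 / 1024 ))        # 4096 * 26476544 / 2^20 = 103424 MiB

The 5171 MiB used for base_size and cache_size above appears to be 5% of that 103424 MiB result, i.e. 103424 * 5 / 100 rounded down.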
00:18:13.726 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:13.726 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72f8d974-9f36-4c9c-9696-6e6863be9c5e 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.984 { 00:18:13.984 "name": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:13.984 "aliases": [ 00:18:13.984 "lvs/nvme0n1p0" 00:18:13.984 ], 00:18:13.984 "product_name": "Logical Volume", 00:18:13.984 "block_size": 4096, 00:18:13.984 "num_blocks": 26476544, 00:18:13.984 "uuid": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:13.984 "assigned_rate_limits": { 00:18:13.984 "rw_ios_per_sec": 0, 00:18:13.984 "rw_mbytes_per_sec": 0, 00:18:13.984 "r_mbytes_per_sec": 0, 00:18:13.984 "w_mbytes_per_sec": 0 00:18:13.984 }, 00:18:13.984 "claimed": false, 00:18:13.984 "zoned": false, 00:18:13.984 "supported_io_types": { 00:18:13.984 "read": true, 00:18:13.984 "write": true, 00:18:13.984 "unmap": true, 00:18:13.984 "flush": false, 00:18:13.984 "reset": true, 00:18:13.984 "nvme_admin": false, 00:18:13.984 "nvme_io": false, 00:18:13.984 "nvme_io_md": false, 00:18:13.984 "write_zeroes": true, 00:18:13.984 "zcopy": false, 00:18:13.984 "get_zone_info": false, 00:18:13.984 "zone_management": false, 00:18:13.984 "zone_append": false, 00:18:13.984 "compare": false, 00:18:13.984 "compare_and_write": false, 00:18:13.984 "abort": false, 00:18:13.984 "seek_hole": true, 00:18:13.984 "seek_data": true, 00:18:13.984 "copy": false, 00:18:13.984 "nvme_iov_md": false 00:18:13.984 }, 00:18:13.984 "driver_specific": { 00:18:13.984 "lvol": { 00:18:13.984 "lvol_store_uuid": "2119d3de-3c85-4291-867e-b9fae5b7ef89", 00:18:13.984 "base_bdev": "nvme0n1", 00:18:13.984 "thin_provision": true, 00:18:13.984 "num_allocated_clusters": 0, 00:18:13.984 "snapshot": false, 00:18:13.984 "clone": false, 00:18:13.984 "esnap_clone": false 00:18:13.984 } 00:18:13.984 } 00:18:13.984 } 00:18:13.984 ]' 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:13.984 03:05:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:13.984 03:05:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:13.984 03:05:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 72f8d974-9f36-4c9c-9696-6e6863be9c5e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:14.244 [2024-11-29 03:05:30.014611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.015017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.244 [2024-11-29 03:05:30.015075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:14.244 [2024-11-29 03:05:30.015119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.017115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.017278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:18:14.244 [2024-11-29 03:05:30.017353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:18:14.244 [2024-11-29 03:05:30.017408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.017971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:14.244 [2024-11-29 03:05:30.018425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.244 [2024-11-29 03:05:30.018552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.018655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.244 [2024-11-29 03:05:30.018709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:18:14.244 [2024-11-29 03:05:30.018757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.018971] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:18:14.244 [2024-11-29 03:05:30.020060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.020182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:14.244 [2024-11-29 03:05:30.020269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:14.244 [2024-11-29 03:05:30.020363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.025592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.025727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.244 [2024-11-29 03:05:30.025811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:18:14.244 [2024-11-29 03:05:30.025878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.026053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.026111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.244 [2024-11-29 03:05:30.026194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:14.244 [2024-11-29 03:05:30.026270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.026342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.026416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:14.244 [2024-11-29 03:05:30.026517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:14.244 [2024-11-29 03:05:30.026601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.026683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:14.244 [2024-11-29 03:05:30.028079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.028199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.244 [2024-11-29 03:05:30.028283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:18:14.244 [2024-11-29 03:05:30.028329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:14.244 [2024-11-29 03:05:30.028436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.244 [2024-11-29 03:05:30.028480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:14.244 [2024-11-29 03:05:30.028593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:14.244 [2024-11-29 03:05:30.028641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.244 [2024-11-29 03:05:30.028748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:14.244 [2024-11-29 03:05:30.028918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:14.244 [2024-11-29 03:05:30.029012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:14.244 [2024-11-29 03:05:30.029095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:14.244 [2024-11-29 03:05:30.029152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:14.244 [2024-11-29 03:05:30.029237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:14.244 [2024-11-29 03:05:30.029291] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:14.244 [2024-11-29 03:05:30.029334] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:14.244 [2024-11-29 03:05:30.029380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:14.244 [2024-11-29 03:05:30.029425] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:14.245 [2024-11-29 03:05:30.029475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.245 [2024-11-29 03:05:30.029599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:14.245 [2024-11-29 03:05:30.029644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:18:14.245 [2024-11-29 03:05:30.029741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.245 [2024-11-29 03:05:30.029869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.245 [2024-11-29 03:05:30.029918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:14.245 [2024-11-29 03:05:30.030033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:14.245 [2024-11-29 03:05:30.030081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.245 [2024-11-29 03:05:30.030248] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:14.245 [2024-11-29 03:05:30.030335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:14.245 [2024-11-29 03:05:30.030412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.245 [2024-11-29 03:05:30.030450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.030479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:14.245 [2024-11-29 03:05:30.030554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.030591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:14.245 [2024-11-29 03:05:30.030625] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:18:14.245 [2024-11-29 03:05:30.030655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:14.245 [2024-11-29 03:05:30.030686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.245 [2024-11-29 03:05:30.030753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:14.245 [2024-11-29 03:05:30.030794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:14.245 [2024-11-29 03:05:30.030848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.245 [2024-11-29 03:05:30.030884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:14.245 [2024-11-29 03:05:30.030949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:14.245 [2024-11-29 03:05:30.030992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:14.245 [2024-11-29 03:05:30.031056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:14.245 [2024-11-29 03:05:30.031121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:14.245 [2024-11-29 03:05:30.031192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.245 [2024-11-29 03:05:30.031305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:14.245 [2024-11-29 03:05:30.031336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.245 [2024-11-29 03:05:30.031432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:14.245 [2024-11-29 03:05:30.031461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.245 [2024-11-29 03:05:30.031557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:14.245 [2024-11-29 03:05:30.031602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.245 [2024-11-29 03:05:30.031661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:14.245 [2024-11-29 03:05:30.031731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:14.245 [2024-11-29 03:05:30.031773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.245 [2024-11-29 03:05:30.031804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:14.245 [2024-11-29 03:05:30.031848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:14.245 [2024-11-29 03:05:30.031918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.245 [2024-11-29 03:05:30.031957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:14.245 [2024-11-29 03:05:30.031990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:14.245 [2024-11-29 
03:05:30.032025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.032091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:14.245 [2024-11-29 03:05:30.032125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:14.245 [2024-11-29 03:05:30.032157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.032189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:14.245 [2024-11-29 03:05:30.032263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:14.245 [2024-11-29 03:05:30.032304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.245 [2024-11-29 03:05:30.032371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.245 [2024-11-29 03:05:30.032405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:14.245 [2024-11-29 03:05:30.032438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:14.245 [2024-11-29 03:05:30.032470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:14.245 [2024-11-29 03:05:30.032535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:14.245 [2024-11-29 03:05:30.032573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:14.245 [2024-11-29 03:05:30.032605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:14.245 [2024-11-29 03:05:30.032644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:14.245 [2024-11-29 03:05:30.032711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.245 [2024-11-29 03:05:30.032754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:14.245 [2024-11-29 03:05:30.032788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:14.245 [2024-11-29 03:05:30.032868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:14.245 [2024-11-29 03:05:30.032910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:14.245 [2024-11-29 03:05:30.032943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:14.245 [2024-11-29 03:05:30.032976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:14.245 [2024-11-29 03:05:30.033014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:14.245 [2024-11-29 03:05:30.033096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:14.245 [2024-11-29 03:05:30.033139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:14.245 [2024-11-29 03:05:30.033175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:14.245 [2024-11-29 03:05:30.033205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:14.245 [2024-11-29 03:05:30.033235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:14.245 [2024-11-29 03:05:30.033308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:14.246 [2024-11-29 03:05:30.033348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:14.246 [2024-11-29 03:05:30.033381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:14.246 [2024-11-29 03:05:30.033414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.246 [2024-11-29 03:05:30.033447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:14.246 [2024-11-29 03:05:30.033541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:14.246 [2024-11-29 03:05:30.033584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:14.246 [2024-11-29 03:05:30.033619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:14.246 [2024-11-29 03:05:30.033687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.246 [2024-11-29 03:05:30.033734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:14.246 [2024-11-29 03:05:30.033772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:18:14.246 [2024-11-29 03:05:30.033804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.246 [2024-11-29 03:05:30.033962] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
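Aside: the layout dump above can be spot-checked by hand; region sizes are given in 4 KiB blocks, and the numbers are mutually consistent (bash arithmetic, values taken from the dump):

    echo $(( 0x5a00 * 4096 / 1048576 ))       # l2p region, 0x5a00 blocks  = 90 MiB
    echo $(( 23592960 * 4 / 1048576 ))        # 23592960 L2P entries * 4 B = 90 MiB
    echo $(( 2048 * 4096 / 1048576 ))         # 2048 P2L checkpoint pages  = 8 MiB per p2l region
    echo $(( 23592960 * 4096 / 1073741824 ))  # 23592960 user blocks       = 90 GiB exported

So the 90.00 MiB l2p region is exactly one 4-byte entry per user-visible 4 KiB block, and the roughly 90 GiB the FTL device will export is the 103424 MiB base device less the 10% overprovisioning and metadata bands. The scrub that follows covers the "NV cache chunk count 5" reported earlier.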
00:18:14.246 [2024-11-29 03:05:30.034011] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:16.781 [2024-11-29 03:05:32.451894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.452369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:16.781 [2024-11-29 03:05:32.452450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2417.917 ms 00:18:16.781 [2024-11-29 03:05:32.452493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.460987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.461181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.781 [2024-11-29 03:05:32.461290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.329 ms 00:18:16.781 [2024-11-29 03:05:32.461335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.461514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.461663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.781 [2024-11-29 03:05:32.461766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:16.781 [2024-11-29 03:05:32.461849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.480420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.480679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.781 [2024-11-29 03:05:32.480774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.476 ms 00:18:16.781 [2024-11-29 03:05:32.480860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.481007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.481183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.781 [2024-11-29 03:05:32.481267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.781 [2024-11-29 03:05:32.481449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.481995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.482185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.781 [2024-11-29 03:05:32.482298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:18:16.781 [2024-11-29 03:05:32.482648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.482880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.483020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.781 [2024-11-29 03:05:32.483172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:16.781 [2024-11-29 03:05:32.483247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.489179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.489262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.781 [2024-11-29 
03:05:32.489309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.775 ms 00:18:16.781 [2024-11-29 03:05:32.489354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.497824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:16.781 [2024-11-29 03:05:32.512039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.512199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.781 [2024-11-29 03:05:32.512262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.471 ms 00:18:16.781 [2024-11-29 03:05:32.512305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.581862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.582204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:16.781 [2024-11-29 03:05:32.582281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.426 ms 00:18:16.781 [2024-11-29 03:05:32.582337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.582570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.582687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.781 [2024-11-29 03:05:32.582762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:16.781 [2024-11-29 03:05:32.582817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.586749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.586928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:16.781 [2024-11-29 03:05:32.586985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:18:16.781 [2024-11-29 03:05:32.587026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.590518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.590675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:16.781 [2024-11-29 03:05:32.590881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:18:16.781 [2024-11-29 03:05:32.591241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.591713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.591867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.781 [2024-11-29 03:05:32.592235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:16.781 [2024-11-29 03:05:32.592301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.781 [2024-11-29 03:05:32.624556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.781 [2024-11-29 03:05:32.624751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:16.781 [2024-11-29 03:05:32.624814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.170 ms 00:18:16.782 [2024-11-29 03:05:32.624886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.629866] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.782 [2024-11-29 03:05:32.630022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:16.782 [2024-11-29 03:05:32.630469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:18:16.782 [2024-11-29 03:05:32.630615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.634546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.782 [2024-11-29 03:05:32.634704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:16.782 [2024-11-29 03:05:32.634811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:18:16.782 [2024-11-29 03:05:32.634898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.639525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.782 [2024-11-29 03:05:32.639677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.782 [2024-11-29 03:05:32.639776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:18:16.782 [2024-11-29 03:05:32.639854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.640105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.782 [2024-11-29 03:05:32.640213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.782 [2024-11-29 03:05:32.640380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:16.782 [2024-11-29 03:05:32.640477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.640666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.782 [2024-11-29 03:05:32.640770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.782 [2024-11-29 03:05:32.640876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:16.782 [2024-11-29 03:05:32.640941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.782 [2024-11-29 03:05:32.641983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.782 [2024-11-29 03:05:32.643156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2627.001 ms, result 0 00:18:16.782 [2024-11-29 03:05:32.644269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:16.782 { 00:18:16.782 "name": "ftl0", 00:18:16.782 "uuid": "d72ba7a5-0ce0-4bc6-a145-7b09d187e338" 00:18:16.782 } 00:18:16.782 03:05:32 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.782 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:17.043 03:05:32 ftl.ftl_trim -- common/autotest_common.sh@910 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:17.304 [ 00:18:17.304 { 00:18:17.304 "name": "ftl0", 00:18:17.304 "aliases": [ 00:18:17.304 "d72ba7a5-0ce0-4bc6-a145-7b09d187e338" 00:18:17.304 ], 00:18:17.304 "product_name": "FTL disk", 00:18:17.304 "block_size": 4096, 00:18:17.304 "num_blocks": 23592960, 00:18:17.304 "uuid": "d72ba7a5-0ce0-4bc6-a145-7b09d187e338", 00:18:17.304 "assigned_rate_limits": { 00:18:17.304 "rw_ios_per_sec": 0, 00:18:17.305 "rw_mbytes_per_sec": 0, 00:18:17.305 "r_mbytes_per_sec": 0, 00:18:17.305 "w_mbytes_per_sec": 0 00:18:17.305 }, 00:18:17.305 "claimed": false, 00:18:17.305 "zoned": false, 00:18:17.305 "supported_io_types": { 00:18:17.305 "read": true, 00:18:17.305 "write": true, 00:18:17.305 "unmap": true, 00:18:17.305 "flush": true, 00:18:17.305 "reset": false, 00:18:17.305 "nvme_admin": false, 00:18:17.305 "nvme_io": false, 00:18:17.305 "nvme_io_md": false, 00:18:17.305 "write_zeroes": true, 00:18:17.305 "zcopy": false, 00:18:17.305 "get_zone_info": false, 00:18:17.305 "zone_management": false, 00:18:17.305 "zone_append": false, 00:18:17.305 "compare": false, 00:18:17.305 "compare_and_write": false, 00:18:17.305 "abort": false, 00:18:17.305 "seek_hole": false, 00:18:17.305 "seek_data": false, 00:18:17.305 "copy": false, 00:18:17.305 "nvme_iov_md": false 00:18:17.305 }, 00:18:17.305 "driver_specific": { 00:18:17.305 "ftl": { 00:18:17.305 "base_bdev": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:17.305 "cache": "nvc0n1p0" 00:18:17.305 } 00:18:17.305 } 00:18:17.305 } 00:18:17.305 ] 00:18:17.305 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:17.305 03:05:33 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:17.305 03:05:33 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:17.305 03:05:33 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:17.305 03:05:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:17.566 03:05:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:17.566 { 00:18:17.566 "name": "ftl0", 00:18:17.566 "aliases": [ 00:18:17.566 "d72ba7a5-0ce0-4bc6-a145-7b09d187e338" 00:18:17.566 ], 00:18:17.566 "product_name": "FTL disk", 00:18:17.566 "block_size": 4096, 00:18:17.566 "num_blocks": 23592960, 00:18:17.566 "uuid": "d72ba7a5-0ce0-4bc6-a145-7b09d187e338", 00:18:17.566 "assigned_rate_limits": { 00:18:17.566 "rw_ios_per_sec": 0, 00:18:17.566 "rw_mbytes_per_sec": 0, 00:18:17.566 "r_mbytes_per_sec": 0, 00:18:17.566 "w_mbytes_per_sec": 0 00:18:17.566 }, 00:18:17.566 "claimed": false, 00:18:17.566 "zoned": false, 00:18:17.566 "supported_io_types": { 00:18:17.566 "read": true, 00:18:17.566 "write": true, 00:18:17.566 "unmap": true, 00:18:17.566 "flush": true, 00:18:17.566 "reset": false, 00:18:17.566 "nvme_admin": false, 00:18:17.566 "nvme_io": false, 00:18:17.566 "nvme_io_md": false, 00:18:17.566 "write_zeroes": true, 00:18:17.566 "zcopy": false, 00:18:17.566 "get_zone_info": false, 00:18:17.566 "zone_management": false, 00:18:17.566 "zone_append": false, 00:18:17.566 "compare": false, 00:18:17.566 "compare_and_write": false, 00:18:17.566 "abort": false, 00:18:17.566 "seek_hole": false, 00:18:17.566 "seek_data": false, 00:18:17.566 "copy": false, 00:18:17.566 "nvme_iov_md": false 00:18:17.566 }, 00:18:17.566 "driver_specific": { 00:18:17.566 "ftl": { 00:18:17.566 "base_bdev": "72f8d974-9f36-4c9c-9696-6e6863be9c5e", 00:18:17.566 "cache": "nvc0n1p0" 
00:18:17.566 } 00:18:17.566 } 00:18:17.566 } 00:18:17.566 ]' 00:18:17.566 03:05:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:17.566 03:05:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:17.566 03:05:33 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:17.830 [2024-11-29 03:05:33.697058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.697395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.830 [2024-11-29 03:05:33.697503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.830 [2024-11-29 03:05:33.697553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.697638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:17.830 [2024-11-29 03:05:33.698177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.698254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.830 [2024-11-29 03:05:33.698389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:17.830 [2024-11-29 03:05:33.698441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.699201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.699252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.830 [2024-11-29 03:05:33.699304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:18:17.830 [2024-11-29 03:05:33.699351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.703029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.703169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.830 [2024-11-29 03:05:33.703215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:18:17.830 [2024-11-29 03:05:33.703259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.710246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.710401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.830 [2024-11-29 03:05:33.710447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.893 ms 00:18:17.830 [2024-11-29 03:05:33.710497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.712533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.712681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.830 [2024-11-29 03:05:33.712743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:18:17.830 [2024-11-29 03:05:33.712785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.717391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.717544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.830 [2024-11-29 03:05:33.717678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.526 ms 00:18:17.830 
[2024-11-29 03:05:33.717741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.718136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.718255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.830 [2024-11-29 03:05:33.718338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:17.830 [2024-11-29 03:05:33.718395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.720246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.720396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.830 [2024-11-29 03:05:33.720536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:18:17.830 [2024-11-29 03:05:33.720594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.722464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.722616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.830 [2024-11-29 03:05:33.722782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:18:17.830 [2024-11-29 03:05:33.722857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.724752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.724910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.830 [2024-11-29 03:05:33.725050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:18:17.830 [2024-11-29 03:05:33.725100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.726454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.830 [2024-11-29 03:05:33.726590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.830 [2024-11-29 03:05:33.726689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:18:17.830 [2024-11-29 03:05:33.726751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.830 [2024-11-29 03:05:33.726937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.830 [2024-11-29 03:05:33.727055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.727971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.728008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.728047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.830 [2024-11-29 03:05:33.728130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.728948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729130] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.729963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 
03:05:33.730469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.730956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
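Aside: this band dump (continuing below) is emitted on the clean-shutdown path that bdev_ftl_unload triggered above. Reading each entry as valid_blocks / band_capacity (inferred from the ftl_debug.c dump format, not stated in the log itself), "0 / 261120 wr_cnt: 0 state: free" means no band was ever written, as expected for a freshly created device. Per band, 261120 blocks is about 1 GiB of data:

    echo $(( 261120 * 4096 / 1048576 ))   # 1020 MiB per band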
00:18:17.831 [2024-11-29 03:05:33.731353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.831 [2024-11-29 03:05:33.731382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.832 [2024-11-29 03:05:33.731525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.832 [2024-11-29 03:05:33.731534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:18:17.832 [2024-11-29 03:05:33.731543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.832 [2024-11-29 03:05:33.731553] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.832 [2024-11-29 03:05:33.731568] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.832 [2024-11-29 03:05:33.731576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.832 [2024-11-29 03:05:33.731584] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.832 [2024-11-29 03:05:33.731592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.832 [2024-11-29 03:05:33.731600] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.832 [2024-11-29 03:05:33.731607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.832 [2024-11-29 03:05:33.731614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.832 [2024-11-29 03:05:33.731622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.832 [2024-11-29 03:05:33.731631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.832 [2024-11-29 03:05:33.731639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:18:17.832 [2024-11-29 03:05:33.731660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.734300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.832 [2024-11-29 03:05:33.734444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.832 [2024-11-29 03:05:33.734561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:18:17.832 [2024-11-29 03:05:33.734624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.734778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.832 [2024-11-29 03:05:33.734900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.832 [2024-11-29 03:05:33.734991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:17.832 [2024-11-29 03:05:33.735046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.740318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.740477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.832 [2024-11-29 03:05:33.740569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.740627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.740761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.740817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.832 [2024-11-29 03:05:33.740890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.740945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.741060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.741124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.832 [2024-11-29 03:05:33.741191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.741315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.741438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.741551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.832 [2024-11-29 03:05:33.741678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.741774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.750941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:17.832 [2024-11-29 03:05:33.751124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.832 [2024-11-29 03:05:33.751263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.751315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.759078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.759269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.832 [2024-11-29 03:05:33.759370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.759429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.759599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.759698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.832 [2024-11-29 03:05:33.759795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.759866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.760102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.760147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.832 [2024-11-29 03:05:33.760186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.760218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.760356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.760467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.832 [2024-11-29 03:05:33.760527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.760579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.760875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.760928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.832 [2024-11-29 03:05:33.760967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.761009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.761084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.761119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.832 [2024-11-29 03:05:33.761166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.761199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 03:05:33.761274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.832 [2024-11-29 03:05:33.761393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.832 [2024-11-29 03:05:33.761452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.832 [2024-11-29 03:05:33.761505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.832 [2024-11-29 
03:05:33.761762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.665 ms, result 0 00:18:17.832 true 00:18:17.832 03:05:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87119 00:18:17.832 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87119 ']' 00:18:17.832 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87119 00:18:17.832 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87119 00:18:17.833 killing process with pid 87119 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87119' 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87119 00:18:17.833 03:05:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87119 00:18:23.116 03:05:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:23.731 65536+0 records in 00:18:23.731 65536+0 records out 00:18:23.731 268435456 bytes (268 MB, 256 MiB) copied, 1.10011 s, 244 MB/s 00:18:23.731 03:05:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.731 [2024-11-29 03:05:39.710260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:18:24.006 [2024-11-29 03:05:39.711083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87291 ] 00:18:24.006 [2024-11-29 03:05:39.851198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.006 [2024-11-29 03:05:39.871020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.006 [2024-11-29 03:05:39.967747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.006 [2024-11-29 03:05:39.968036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.269 [2024-11-29 03:05:40.124587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.124794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.269 [2024-11-29 03:05:40.125100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:24.269 [2024-11-29 03:05:40.125141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.127607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.127774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.269 [2024-11-29 03:05:40.127856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.411 ms 00:18:24.269 [2024-11-29 03:05:40.127882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.127993] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.269 [2024-11-29 03:05:40.128295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.269 [2024-11-29 03:05:40.128349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.128374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.269 [2024-11-29 03:05:40.128770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:24.269 [2024-11-29 03:05:40.128821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.130510] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:24.269 [2024-11-29 03:05:40.134097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.134243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:24.269 [2024-11-29 03:05:40.134305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:18:24.269 [2024-11-29 03:05:40.134328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.134425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.134459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:24.269 [2024-11-29 03:05:40.134479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:24.269 [2024-11-29 03:05:40.134499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.141623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:24.269 [2024-11-29 03:05:40.141763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.269 [2024-11-29 03:05:40.141815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.064 ms 00:18:24.269 [2024-11-29 03:05:40.141860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.142007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.142037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.269 [2024-11-29 03:05:40.142117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:24.269 [2024-11-29 03:05:40.142147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.142192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.142214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.269 [2024-11-29 03:05:40.142235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:24.269 [2024-11-29 03:05:40.142395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.142475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:24.269 [2024-11-29 03:05:40.144359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.144396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.269 [2024-11-29 03:05:40.144406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:18:24.269 [2024-11-29 03:05:40.144418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.144462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.144471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.269 [2024-11-29 03:05:40.144479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:24.269 [2024-11-29 03:05:40.144491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.144509] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:24.269 [2024-11-29 03:05:40.144529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:24.269 [2024-11-29 03:05:40.144570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:24.269 [2024-11-29 03:05:40.144589] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:24.269 [2024-11-29 03:05:40.144695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.269 [2024-11-29 03:05:40.144707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.269 [2024-11-29 03:05:40.144718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:24.269 [2024-11-29 03:05:40.144729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.269 [2024-11-29 03:05:40.144738] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.269 [2024-11-29 03:05:40.144747] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:24.269 [2024-11-29 03:05:40.144755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.269 [2024-11-29 03:05:40.144763] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.269 [2024-11-29 03:05:40.144774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.269 [2024-11-29 03:05:40.144784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.144792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.269 [2024-11-29 03:05:40.144799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:24.269 [2024-11-29 03:05:40.144808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.145041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.269 [2024-11-29 03:05:40.145078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.269 [2024-11-29 03:05:40.145098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:24.269 [2024-11-29 03:05:40.145118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.269 [2024-11-29 03:05:40.145239] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.269 [2024-11-29 03:05:40.145344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.269 [2024-11-29 03:05:40.145368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.269 [2024-11-29 03:05:40.145433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.269 [2024-11-29 03:05:40.145509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.269 [2024-11-29 03:05:40.145546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.269 [2024-11-29 03:05:40.145565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:24.269 [2024-11-29 03:05:40.145634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.269 [2024-11-29 03:05:40.145658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.269 [2024-11-29 03:05:40.145727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:24.269 [2024-11-29 03:05:40.145738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.269 [2024-11-29 03:05:40.145753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145761] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.269 [2024-11-29 03:05:40.145775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.269 [2024-11-29 03:05:40.145803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.269 [2024-11-29 03:05:40.145839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.269 [2024-11-29 03:05:40.145863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.269 [2024-11-29 03:05:40.145878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.269 [2024-11-29 03:05:40.145885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:24.269 [2024-11-29 03:05:40.145891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.269 [2024-11-29 03:05:40.145898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.269 [2024-11-29 03:05:40.145904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:24.270 [2024-11-29 03:05:40.145911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.270 [2024-11-29 03:05:40.145918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.270 [2024-11-29 03:05:40.145925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:24.270 [2024-11-29 03:05:40.145936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.270 [2024-11-29 03:05:40.145943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:24.270 [2024-11-29 03:05:40.145950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:24.270 [2024-11-29 03:05:40.145957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.270 [2024-11-29 03:05:40.145965] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.270 [2024-11-29 03:05:40.145973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.270 [2024-11-29 03:05:40.145980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.270 [2024-11-29 03:05:40.145987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.270 [2024-11-29 03:05:40.145995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.270 [2024-11-29 03:05:40.146002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.270 [2024-11-29 03:05:40.146008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.270 
[2024-11-29 03:05:40.146016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.270 [2024-11-29 03:05:40.146023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.270 [2024-11-29 03:05:40.146029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.270 [2024-11-29 03:05:40.146038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.270 [2024-11-29 03:05:40.146049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:24.270 [2024-11-29 03:05:40.146066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:24.270 [2024-11-29 03:05:40.146073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:24.270 [2024-11-29 03:05:40.146081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:24.270 [2024-11-29 03:05:40.146089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:24.270 [2024-11-29 03:05:40.146097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:24.270 [2024-11-29 03:05:40.146104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:24.270 [2024-11-29 03:05:40.146119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:24.270 [2024-11-29 03:05:40.146126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:24.270 [2024-11-29 03:05:40.146133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:24.270 [2024-11-29 03:05:40.146170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.270 [2024-11-29 03:05:40.146183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.270 [2024-11-29 03:05:40.146202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.270 [2024-11-29 03:05:40.146209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.270 [2024-11-29 03:05:40.146216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.270 [2024-11-29 03:05:40.146224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.146232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.270 [2024-11-29 03:05:40.146241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:18:24.270 [2024-11-29 03:05:40.146249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.158817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.158871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.270 [2024-11-29 03:05:40.158882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.489 ms 00:18:24.270 [2024-11-29 03:05:40.158890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.159015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.159033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.270 [2024-11-29 03:05:40.159041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:24.270 [2024-11-29 03:05:40.159054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.182417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.182491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.270 [2024-11-29 03:05:40.182513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.339 ms 00:18:24.270 [2024-11-29 03:05:40.182529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.182675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.182696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.270 [2024-11-29 03:05:40.182712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.270 [2024-11-29 03:05:40.182735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.183300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.183345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.270 [2024-11-29 03:05:40.183365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:24.270 [2024-11-29 03:05:40.183383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.183616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.183650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.270 [2024-11-29 03:05:40.183665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:24.270 [2024-11-29 03:05:40.183680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.192227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.192270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.270 [2024-11-29 03:05:40.192287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.510 ms 00:18:24.270 [2024-11-29 03:05:40.192296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.195725] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:24.270 [2024-11-29 03:05:40.195776] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:24.270 [2024-11-29 03:05:40.195789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.195797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:24.270 [2024-11-29 03:05:40.195807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:18:24.270 [2024-11-29 03:05:40.195815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.219025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.219073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:24.270 [2024-11-29 03:05:40.219085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:18:24.270 [2024-11-29 03:05:40.219093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.221068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.221213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:24.270 [2024-11-29 03:05:40.221229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:18:24.270 [2024-11-29 03:05:40.221236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.222952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.222982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:24.270 [2024-11-29 03:05:40.222990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:18:24.270 [2024-11-29 03:05:40.222998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.223304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.223324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.270 [2024-11-29 03:05:40.223333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:24.270 [2024-11-29 03:05:40.223340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.239079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.270 [2024-11-29 03:05:40.239119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:24.270 [2024-11-29 03:05:40.239130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.707 ms 00:18:24.270 [2024-11-29 03:05:40.239138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.270 [2024-11-29 03:05:40.246507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:24.533 [2024-11-29 03:05:40.260656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.260836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.533 [2024-11-29 03:05:40.260861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.464 ms 00:18:24.533 [2024-11-29 03:05:40.260872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.260943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.260954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:24.533 [2024-11-29 03:05:40.260962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:24.533 [2024-11-29 03:05:40.260973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.261017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.261029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.533 [2024-11-29 03:05:40.261037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:24.533 [2024-11-29 03:05:40.261044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.261067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.261075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.533 [2024-11-29 03:05:40.261083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.533 [2024-11-29 03:05:40.261090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.261122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:24.533 [2024-11-29 03:05:40.261132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.261139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:24.533 [2024-11-29 03:05:40.261146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:24.533 [2024-11-29 03:05:40.261155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.265751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.265793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.533 [2024-11-29 03:05:40.265806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:18:24.533 [2024-11-29 03:05:40.265814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 [2024-11-29 03:05:40.265919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.533 [2024-11-29 03:05:40.265932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.533 [2024-11-29 03:05:40.265941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:24.533 [2024-11-29 03:05:40.265949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.533 
[2024-11-29 03:05:40.267087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.533 [2024-11-29 03:05:40.268112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.220 ms, result 0 00:18:24.533 [2024-11-29 03:05:40.269248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:24.533 [2024-11-29 03:05:40.277307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.477  [2024-11-29T03:05:42.438Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T03:05:43.384Z] Copying: 27/256 [MB] (13 MBps) [2024-11-29T03:05:44.330Z] Copying: 39/256 [MB] (11 MBps) [2024-11-29T03:05:45.281Z] Copying: 51/256 [MB] (12 MBps) [2024-11-29T03:05:46.670Z] Copying: 65/256 [MB] (13 MBps) [2024-11-29T03:05:47.615Z] Copying: 81/256 [MB] (16 MBps) [2024-11-29T03:05:48.556Z] Copying: 96/256 [MB] (15 MBps) [2024-11-29T03:05:49.502Z] Copying: 108/256 [MB] (11 MBps) [2024-11-29T03:05:50.449Z] Copying: 121/256 [MB] (12 MBps) [2024-11-29T03:05:51.392Z] Copying: 134/256 [MB] (13 MBps) [2024-11-29T03:05:52.337Z] Copying: 153/256 [MB] (18 MBps) [2024-11-29T03:05:53.283Z] Copying: 165/256 [MB] (12 MBps) [2024-11-29T03:05:54.674Z] Copying: 176/256 [MB] (10 MBps) [2024-11-29T03:05:55.620Z] Copying: 186/256 [MB] (10 MBps) [2024-11-29T03:05:56.561Z] Copying: 196/256 [MB] (10 MBps) [2024-11-29T03:05:57.507Z] Copying: 206/256 [MB] (10 MBps) [2024-11-29T03:05:58.453Z] Copying: 221520/262144 [kB] (10128 kBps) [2024-11-29T03:05:59.397Z] Copying: 231664/262144 [kB] (10144 kBps) [2024-11-29T03:06:00.340Z] Copying: 236/256 [MB] (10 MBps) [2024-11-29T03:06:00.603Z] Copying: 252/256 [MB] (16 MBps) [2024-11-29T03:06:00.603Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-29 03:06:00.566342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.611 [2024-11-29 03:06:00.567695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.567735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.611 [2024-11-29 03:06:00.567747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.611 [2024-11-29 03:06:00.567754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.567771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:44.611 [2024-11-29 03:06:00.568313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.568346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.611 [2024-11-29 03:06:00.568354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:18:44.611 [2024-11-29 03:06:00.568361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.570347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.570376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.611 [2024-11-29 03:06:00.570384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:18:44.611 [2024-11-29 03:06:00.570395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 
03:06:00.577062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.577179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.611 [2024-11-29 03:06:00.577193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.653 ms 00:18:44.611 [2024-11-29 03:06:00.577205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.582435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.582459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:44.611 [2024-11-29 03:06:00.582467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.202 ms 00:18:44.611 [2024-11-29 03:06:00.582480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.584084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.584185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.611 [2024-11-29 03:06:00.584197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms 00:18:44.611 [2024-11-29 03:06:00.584202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.588323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.588357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.611 [2024-11-29 03:06:00.588365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:18:44.611 [2024-11-29 03:06:00.588371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.588465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.588473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.611 [2024-11-29 03:06:00.588480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:44.611 [2024-11-29 03:06:00.588489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.590991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.591094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:44.611 [2024-11-29 03:06:00.591105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:18:44.611 [2024-11-29 03:06:00.591111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.592708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.592735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:44.611 [2024-11-29 03:06:00.592742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:18:44.611 [2024-11-29 03:06:00.592747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.594197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.594295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.611 [2024-11-29 03:06:00.594306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:18:44.611 [2024-11-29 03:06:00.594312] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.595461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.611 [2024-11-29 03:06:00.595489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.611 [2024-11-29 03:06:00.595495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:18:44.611 [2024-11-29 03:06:00.595500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.611 [2024-11-29 03:06:00.595524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.611 [2024-11-29 03:06:00.595536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.611 [2024-11-29 03:06:00.595585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595817] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595970] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.595999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 
03:06:00.596114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.612 [2024-11-29 03:06:00.596125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.613 [2024-11-29 03:06:00.596131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.613 [2024-11-29 03:06:00.596136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.613 [2024-11-29 03:06:00.596148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.613 [2024-11-29 03:06:00.596155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:18:44.613 [2024-11-29 03:06:00.596162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.613 [2024-11-29 03:06:00.596168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.613 [2024-11-29 03:06:00.596173] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.613 [2024-11-29 03:06:00.596179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.613 [2024-11-29 03:06:00.596185] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.613 [2024-11-29 03:06:00.596195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.613 [2024-11-29 03:06:00.596201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.613 [2024-11-29 03:06:00.596207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.613 [2024-11-29 03:06:00.596212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.613 [2024-11-29 03:06:00.596217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.613 [2024-11-29 03:06:00.596225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.613 [2024-11-29 03:06:00.596231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:18:44.613 [2024-11-29 03:06:00.596237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.874 [2024-11-29 03:06:00.597985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.874 [2024-11-29 03:06:00.598005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.875 [2024-11-29 03:06:00.598015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:18:44.875 [2024-11-29 03:06:00.598021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.598111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.875 [2024-11-29 03:06:00.598118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.875 [2024-11-29 03:06:00.598127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:44.875 [2024-11-29 03:06:00.598132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.603991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.604019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.875 
[2024-11-29 03:06:00.604031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.604038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.604098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.604106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.875 [2024-11-29 03:06:00.604112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.604118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.604152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.604160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.875 [2024-11-29 03:06:00.604166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.604172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.604186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.604193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.875 [2024-11-29 03:06:00.604200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.604205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.615073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.615113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.875 [2024-11-29 03:06:00.615122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.615129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.623679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.623714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.875 [2024-11-29 03:06:00.623724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.623730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.623782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.623790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.875 [2024-11-29 03:06:00.623804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.623810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.623852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.623859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.875 [2024-11-29 03:06:00.623869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.623875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.623936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.623944] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.875 [2024-11-29 03:06:00.623951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.623963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.623993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.624000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.875 [2024-11-29 03:06:00.624010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.624018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.624057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.624065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.875 [2024-11-29 03:06:00.624072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.624079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.624118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.875 [2024-11-29 03:06:00.624126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.875 [2024-11-29 03:06:00.624135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.875 [2024-11-29 03:06:00.624141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.875 [2024-11-29 03:06:00.624272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.548 ms, result 0 00:18:44.875 00:18:44.875 00:18:45.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.136 03:06:00 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87505 00:18:45.136 03:06:00 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87505 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87505 ']' 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.136 03:06:00 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.136 03:06:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:45.136 [2024-11-29 03:06:00.960131] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:18:45.136 [2024-11-29 03:06:00.960285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87505 ] 00:18:45.136 [2024-11-29 03:06:01.116388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.397 [2024-11-29 03:06:01.146487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.968 03:06:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.968 03:06:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:45.968 03:06:01 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:46.230 [2024-11-29 03:06:02.021670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.230 [2024-11-29 03:06:02.022127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.230 [2024-11-29 03:06:02.200514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.200937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.230 [2024-11-29 03:06:02.200969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:46.230 [2024-11-29 03:06:02.200981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.230 [2024-11-29 03:06:02.203579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.203646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.230 [2024-11-29 03:06:02.203657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:18:46.230 [2024-11-29 03:06:02.203671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.230 [2024-11-29 03:06:02.203806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.230 [2024-11-29 03:06:02.204144] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.230 [2024-11-29 03:06:02.204170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.204181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.230 [2024-11-29 03:06:02.204192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:18:46.230 [2024-11-29 03:06:02.204203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.230 [2024-11-29 03:06:02.206055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:46.230 [2024-11-29 03:06:02.210064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.210119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:46.230 [2024-11-29 03:06:02.210133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:18:46.230 [2024-11-29 03:06:02.210142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.230 [2024-11-29 03:06:02.210231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.210241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:46.230 [2024-11-29 03:06:02.210261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:46.230 [2024-11-29 03:06:02.210269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.230 [2024-11-29 03:06:02.219261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.230 [2024-11-29 03:06:02.219447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.231 [2024-11-29 03:06:02.219471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.929 ms 00:18:46.231 [2024-11-29 03:06:02.219479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.231 [2024-11-29 03:06:02.219628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.231 [2024-11-29 03:06:02.219640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.231 [2024-11-29 03:06:02.219657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:46.231 [2024-11-29 03:06:02.219664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.231 [2024-11-29 03:06:02.219695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.231 [2024-11-29 03:06:02.219707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.231 [2024-11-29 03:06:02.219718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:46.231 [2024-11-29 03:06:02.219725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.231 [2024-11-29 03:06:02.219752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:46.495 [2024-11-29 03:06:02.221875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.495 [2024-11-29 03:06:02.221926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.495 [2024-11-29 03:06:02.221939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:18:46.495 [2024-11-29 03:06:02.221948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.495 [2024-11-29 03:06:02.221989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.495 [2024-11-29 03:06:02.222000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.495 [2024-11-29 03:06:02.222008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:46.495 [2024-11-29 03:06:02.222017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.495 [2024-11-29 03:06:02.222040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:46.495 [2024-11-29 03:06:02.222063] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:46.495 [2024-11-29 03:06:02.222100] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:46.495 [2024-11-29 03:06:02.222125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:46.495 [2024-11-29 03:06:02.222236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.495 [2024-11-29 03:06:02.222249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.495 [2024-11-29 03:06:02.222260] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:46.495 [2024-11-29 03:06:02.222275] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.495 [2024-11-29 03:06:02.222285] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.495 [2024-11-29 03:06:02.222301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:46.495 [2024-11-29 03:06:02.222309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.495 [2024-11-29 03:06:02.222325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.495 [2024-11-29 03:06:02.222332] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.495 [2024-11-29 03:06:02.222342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.495 [2024-11-29 03:06:02.222350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.495 [2024-11-29 03:06:02.222361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:18:46.495 [2024-11-29 03:06:02.222370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.495 [2024-11-29 03:06:02.222459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.495 [2024-11-29 03:06:02.222471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.495 [2024-11-29 03:06:02.222480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:46.495 [2024-11-29 03:06:02.222488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.495 [2024-11-29 03:06:02.222595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.495 [2024-11-29 03:06:02.222606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.495 [2024-11-29 03:06:02.222617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.495 [2024-11-29 03:06:02.222627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.495 [2024-11-29 03:06:02.222642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.495 [2024-11-29 03:06:02.222650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.495 [2024-11-29 03:06:02.222659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:46.495 [2024-11-29 03:06:02.222670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.495 [2024-11-29 03:06:02.222680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:46.495 [2024-11-29 03:06:02.222688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.495 [2024-11-29 03:06:02.222698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.495 [2024-11-29 03:06:02.222706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:46.495 [2024-11-29 03:06:02.222717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.495 [2024-11-29 03:06:02.222725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.496 [2024-11-29 03:06:02.222736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:46.496 [2024-11-29 03:06:02.222743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.496 
[2024-11-29 03:06:02.222753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.496 [2024-11-29 03:06:02.222762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:46.496 [2024-11-29 03:06:02.222772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.496 [2024-11-29 03:06:02.222781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.496 [2024-11-29 03:06:02.222793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:46.496 [2024-11-29 03:06:02.222801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.496 [2024-11-29 03:06:02.222810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.496 [2024-11-29 03:06:02.222818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.496 [2024-11-29 03:06:02.223072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.496 [2024-11-29 03:06:02.223097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.496 [2024-11-29 03:06:02.223138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.496 [2024-11-29 03:06:02.223157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.496 [2024-11-29 03:06:02.223197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.496 [2024-11-29 03:06:02.223218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.496 [2024-11-29 03:06:02.223256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.496 [2024-11-29 03:06:02.223274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:46.496 [2024-11-29 03:06:02.223296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.496 [2024-11-29 03:06:02.223314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.496 [2024-11-29 03:06:02.223396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:46.496 [2024-11-29 03:06:02.223419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.496 [2024-11-29 03:06:02.223458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:46.496 [2024-11-29 03:06:02.223478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.496 [2024-11-29 03:06:02.223960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.496 [2024-11-29 03:06:02.224032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.496 [2024-11-29 03:06:02.224127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.496 [2024-11-29 03:06:02.224143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.496 [2024-11-29 03:06:02.224152] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:46.496 [2024-11-29 03:06:02.224162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.496 [2024-11-29 03:06:02.224171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.496 [2024-11-29 03:06:02.224181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.496 [2024-11-29 03:06:02.224188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.496 [2024-11-29 03:06:02.224200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.496 [2024-11-29 03:06:02.224210] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.496 [2024-11-29 03:06:02.224226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:46.496 [2024-11-29 03:06:02.224253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:46.496 [2024-11-29 03:06:02.224261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:46.496 [2024-11-29 03:06:02.224271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:46.496 [2024-11-29 03:06:02.224279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:46.496 [2024-11-29 03:06:02.224289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:46.496 [2024-11-29 03:06:02.224296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:46.496 [2024-11-29 03:06:02.224306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:46.496 [2024-11-29 03:06:02.224313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:46.496 [2024-11-29 03:06:02.224322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:46.496 [2024-11-29 03:06:02.224371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.496 [2024-11-29 
03:06:02.224386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.496 [2024-11-29 03:06:02.224408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.496 [2024-11-29 03:06:02.224416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.496 [2024-11-29 03:06:02.224425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.496 [2024-11-29 03:06:02.224436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.224447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.496 [2024-11-29 03:06:02.224455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:18:46.496 [2024-11-29 03:06:02.224464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.239132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.239185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.496 [2024-11-29 03:06:02.239199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.557 ms 00:18:46.496 [2024-11-29 03:06:02.239210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.239354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.239371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.496 [2024-11-29 03:06:02.239385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:46.496 [2024-11-29 03:06:02.239395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.252104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.252156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.496 [2024-11-29 03:06:02.252167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:18:46.496 [2024-11-29 03:06:02.252180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.252253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.252266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.496 [2024-11-29 03:06:02.252275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.496 [2024-11-29 03:06:02.252289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.252776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.252814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.496 [2024-11-29 03:06:02.252825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:18:46.496 [2024-11-29 03:06:02.252853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.253017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.253036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.496 [2024-11-29 03:06:02.253045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:46.496 [2024-11-29 03:06:02.253059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.261562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.261721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.496 [2024-11-29 03:06:02.262178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.478 ms 00:18:46.496 [2024-11-29 03:06:02.262249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.496 [2024-11-29 03:06:02.277112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:46.496 [2024-11-29 03:06:02.277411] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:46.496 [2024-11-29 03:06:02.277697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.496 [2024-11-29 03:06:02.277743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:46.496 [2024-11-29 03:06:02.277784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.273 ms 00:18:46.496 [2024-11-29 03:06:02.278011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.296257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.296469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:46.497 [2024-11-29 03:06:02.296537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.143 ms 00:18:46.497 [2024-11-29 03:06:02.296574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.299818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.300021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:46.497 [2024-11-29 03:06:02.300114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:18:46.497 [2024-11-29 03:06:02.300142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.303094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.303272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:46.497 [2024-11-29 03:06:02.303330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:18:46.497 [2024-11-29 03:06:02.303355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.303715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.303777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.497 [2024-11-29 03:06:02.303925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:18:46.497 [2024-11-29 03:06:02.303955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 
03:06:02.327338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.327537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:46.497 [2024-11-29 03:06:02.327597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.336 ms 00:18:46.497 [2024-11-29 03:06:02.327626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.335745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:46.497 [2024-11-29 03:06:02.354382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.354558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.497 [2024-11-29 03:06:02.354619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.648 ms 00:18:46.497 [2024-11-29 03:06:02.354643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.354759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.354790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:46.497 [2024-11-29 03:06:02.354814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:46.497 [2024-11-29 03:06:02.354858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.355021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.355040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.497 [2024-11-29 03:06:02.355052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:46.497 [2024-11-29 03:06:02.355060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.355092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.355102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.497 [2024-11-29 03:06:02.355117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.497 [2024-11-29 03:06:02.355124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.355167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.497 [2024-11-29 03:06:02.355177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.355187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.497 [2024-11-29 03:06:02.355195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:46.497 [2024-11-29 03:06:02.355205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.361222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.361385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.497 [2024-11-29 03:06:02.361403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.993 ms 00:18:46.497 [2024-11-29 03:06:02.361416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.361512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.497 [2024-11-29 03:06:02.361524] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.497 [2024-11-29 03:06:02.361534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:46.497 [2024-11-29 03:06:02.361543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.497 [2024-11-29 03:06:02.362588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.497 [2024-11-29 03:06:02.363970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.767 ms, result 0 00:18:46.497 [2024-11-29 03:06:02.365493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.497 Some configs were skipped because the RPC state that can call them passed over. 00:18:46.497 03:06:02 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:46.758 [2024-11-29 03:06:02.603743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.758 [2024-11-29 03:06:02.603948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:46.758 [2024-11-29 03:06:02.604027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:18:46.758 [2024-11-29 03:06:02.604053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.758 [2024-11-29 03:06:02.604115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.500 ms, result 0 00:18:46.758 true 00:18:46.758 03:06:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:47.020 [2024-11-29 03:06:02.819699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:02.819909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:47.020 [2024-11-29 03:06:02.820164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:18:47.020 true 00:18:47.020 [2024-11-29 03:06:02.820215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.020 [2024-11-29 03:06:02.820272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.390 ms, result 0 00:18:47.020 03:06:02 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87505 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87505 ']' 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87505 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87505 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87505' 00:18:47.020 killing process with pid 87505 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87505 00:18:47.020 03:06:02 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87505 00:18:47.020 [2024-11-29 03:06:02.995233] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:02.995442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.020 [2024-11-29 03:06:02.995503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:47.020 [2024-11-29 03:06:02.995528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.020 [2024-11-29 03:06:02.995576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.020 [2024-11-29 03:06:02.996100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:02.996218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.020 [2024-11-29 03:06:02.996273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:18:47.020 [2024-11-29 03:06:02.996297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.020 [2024-11-29 03:06:02.996606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:02.996678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.020 [2024-11-29 03:06:02.996706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:47.020 [2024-11-29 03:06:02.996727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.020 [2024-11-29 03:06:03.001169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:03.001279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.020 [2024-11-29 03:06:03.001331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:18:47.020 [2024-11-29 03:06:03.001360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.020 [2024-11-29 03:06:03.008380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.020 [2024-11-29 03:06:03.008500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.020 [2024-11-29 03:06:03.008553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.974 ms 00:18:47.020 [2024-11-29 03:06:03.008931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.011772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.011912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.282 [2024-11-29 03:06:03.011929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.646 ms 00:18:47.282 [2024-11-29 03:06:03.011939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.016463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.016509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.282 [2024-11-29 03:06:03.016521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.488 ms 00:18:47.282 [2024-11-29 03:06:03.016531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.016660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.016672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.282 [2024-11-29 03:06:03.016681] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:47.282 [2024-11-29 03:06:03.016690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.019506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.019546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.282 [2024-11-29 03:06:03.019555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:18:47.282 [2024-11-29 03:06:03.019569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.021587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.021625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.282 [2024-11-29 03:06:03.021634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms 00:18:47.282 [2024-11-29 03:06:03.021643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.282 [2024-11-29 03:06:03.023435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.282 [2024-11-29 03:06:03.023476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.282 [2024-11-29 03:06:03.023485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:18:47.283 [2024-11-29 03:06:03.023494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.283 [2024-11-29 03:06:03.025227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.283 [2024-11-29 03:06:03.025283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.283 [2024-11-29 03:06:03.025293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:18:47.283 [2024-11-29 03:06:03.025302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.283 [2024-11-29 03:06:03.025349] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.283 [2024-11-29 03:06:03.025366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025460] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 
[2024-11-29 03:06:03.025695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:47.283 [2024-11-29 03:06:03.025911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.025997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.283 [2024-11-29 03:06:03.026110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.284 [2024-11-29 03:06:03.026253] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.284 [2024-11-29 03:06:03.026260] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:18:47.284 [2024-11-29 03:06:03.026272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.284 [2024-11-29 03:06:03.026280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.284 [2024-11-29 03:06:03.026288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.284 [2024-11-29 03:06:03.026295] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.284 [2024-11-29 03:06:03.026303] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.284 [2024-11-29 03:06:03.026316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.284 [2024-11-29 03:06:03.026325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.284 [2024-11-29 03:06:03.026331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.284 [2024-11-29 03:06:03.026340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.284 [2024-11-29 03:06:03.026347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:47.284 [2024-11-29 03:06:03.026359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.284 [2024-11-29 03:06:03.026368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:18:47.284 [2024-11-29 03:06:03.026378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.028191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.284 [2024-11-29 03:06:03.028222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.284 [2024-11-29 03:06:03.028235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:18:47.284 [2024-11-29 03:06:03.028245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.028336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.284 [2024-11-29 03:06:03.028346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.284 [2024-11-29 03:06:03.028355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:47.284 [2024-11-29 03:06:03.028363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.034324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.034459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.284 [2024-11-29 03:06:03.034474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.034483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.034563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.034575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.284 [2024-11-29 03:06:03.034584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.034595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.034638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.034649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.284 [2024-11-29 03:06:03.034656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.034665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.034682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.034692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.284 [2024-11-29 03:06:03.034699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.034708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.045130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.045288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.284 [2024-11-29 03:06:03.045307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.045323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 
03:06:03.053186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.053314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.284 [2024-11-29 03:06:03.053366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.053394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.053453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.053478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.284 [2024-11-29 03:06:03.053521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.053544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.053587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.053716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.284 [2024-11-29 03:06:03.053736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.053756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.053900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.053931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.284 [2024-11-29 03:06:03.053951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.054033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.054079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.054090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.284 [2024-11-29 03:06:03.054098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.054110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.054150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.054163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.284 [2024-11-29 03:06:03.054171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.054180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.054226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.284 [2024-11-29 03:06:03.054238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.284 [2024-11-29 03:06:03.054246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.284 [2024-11-29 03:06:03.054255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.284 [2024-11-29 03:06:03.054394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.131 ms, result 0 00:18:47.284 03:06:03 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:47.284 03:06:03 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:47.546 [2024-11-29 03:06:03.305189] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:18:47.546 [2024-11-29 03:06:03.305338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87551 ] 00:18:47.546 [2024-11-29 03:06:03.452898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.546 [2024-11-29 03:06:03.482067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.809 [2024-11-29 03:06:03.594176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.809 [2024-11-29 03:06:03.594253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.809 [2024-11-29 03:06:03.755936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.755998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.809 [2024-11-29 03:06:03.756014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:47.809 [2024-11-29 03:06:03.756023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.758666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.758728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.809 [2024-11-29 03:06:03.758740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:18:47.809 [2024-11-29 03:06:03.758750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.758899] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.809 [2024-11-29 03:06:03.759204] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:47.809 [2024-11-29 03:06:03.759223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.759233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.809 [2024-11-29 03:06:03.759247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:18:47.809 [2024-11-29 03:06:03.759258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.761535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:47.809 [2024-11-29 03:06:03.765563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.765619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:47.809 [2024-11-29 03:06:03.765638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:18:47.809 [2024-11-29 03:06:03.765647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.765745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.765757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:47.809 [2024-11-29 03:06:03.765767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:18:47.809 [2024-11-29 03:06:03.765775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.774362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.774406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.809 [2024-11-29 03:06:03.774421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:18:47.809 [2024-11-29 03:06:03.774429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.774573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.774585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.809 [2024-11-29 03:06:03.774601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:47.809 [2024-11-29 03:06:03.774612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.774639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.774651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.809 [2024-11-29 03:06:03.774660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:47.809 [2024-11-29 03:06:03.774667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.774694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:47.809 [2024-11-29 03:06:03.776912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.777087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.809 [2024-11-29 03:06:03.777114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:18:47.809 [2024-11-29 03:06:03.777126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.777177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.809 [2024-11-29 03:06:03.777187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.809 [2024-11-29 03:06:03.777197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:47.809 [2024-11-29 03:06:03.777206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.809 [2024-11-29 03:06:03.777225] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:47.809 [2024-11-29 03:06:03.777248] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:47.809 [2024-11-29 03:06:03.777295] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:47.809 [2024-11-29 03:06:03.777313] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:47.809 [2024-11-29 03:06:03.777420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:47.810 [2024-11-29 03:06:03.777434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.810 [2024-11-29 03:06:03.777446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:47.810 [2024-11-29 03:06:03.777456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777464] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777472] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:47.810 [2024-11-29 03:06:03.777481] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.810 [2024-11-29 03:06:03.777488] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:47.810 [2024-11-29 03:06:03.777513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:47.810 [2024-11-29 03:06:03.777524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.810 [2024-11-29 03:06:03.777532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.810 [2024-11-29 03:06:03.777540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:47.810 [2024-11-29 03:06:03.777548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.810 [2024-11-29 03:06:03.777636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.810 [2024-11-29 03:06:03.777645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.810 [2024-11-29 03:06:03.777653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:47.810 [2024-11-29 03:06:03.777664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.810 [2024-11-29 03:06:03.777763] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.810 [2024-11-29 03:06:03.777776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.810 [2024-11-29 03:06:03.777785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.810 [2024-11-29 03:06:03.777808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.810 [2024-11-29 03:06:03.777850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.810 [2024-11-29 03:06:03.777863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.810 [2024-11-29 03:06:03.777869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:47.810 [2024-11-29 03:06:03.777875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.810 [2024-11-29 03:06:03.777883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.810 [2024-11-29 03:06:03.777891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:47.810 [2024-11-29 03:06:03.777897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777906] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:47.810 [2024-11-29 03:06:03.777913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.810 [2024-11-29 03:06:03.777933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.810 [2024-11-29 03:06:03.777960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.810 [2024-11-29 03:06:03.777979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:47.810 [2024-11-29 03:06:03.777986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.810 [2024-11-29 03:06:03.777992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.810 [2024-11-29 03:06:03.777999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:47.810 [2024-11-29 03:06:03.778005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.810 [2024-11-29 03:06:03.778011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.810 [2024-11-29 03:06:03.778018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:47.810 [2024-11-29 03:06:03.778025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.810 [2024-11-29 03:06:03.778031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.810 [2024-11-29 03:06:03.778038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:47.810 [2024-11-29 03:06:03.778044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.810 [2024-11-29 03:06:03.778050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:47.810 [2024-11-29 03:06:03.778057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:47.810 [2024-11-29 03:06:03.778066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.778073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:47.810 [2024-11-29 03:06:03.778079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:47.810 [2024-11-29 03:06:03.778085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.778092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.810 [2024-11-29 03:06:03.778101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.810 [2024-11-29 03:06:03.778108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.810 [2024-11-29 03:06:03.778119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.810 [2024-11-29 03:06:03.778127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:47.810 
[2024-11-29 03:06:03.778136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.810 [2024-11-29 03:06:03.778143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.810 [2024-11-29 03:06:03.778150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.810 [2024-11-29 03:06:03.778156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.810 [2024-11-29 03:06:03.778163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.810 [2024-11-29 03:06:03.778171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.810 [2024-11-29 03:06:03.778184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:47.810 [2024-11-29 03:06:03.778201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:47.810 [2024-11-29 03:06:03.778209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:47.810 [2024-11-29 03:06:03.778216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:47.810 [2024-11-29 03:06:03.778223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:47.810 [2024-11-29 03:06:03.778230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:47.810 [2024-11-29 03:06:03.778237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:47.810 [2024-11-29 03:06:03.778250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:47.810 [2024-11-29 03:06:03.778257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:47.810 [2024-11-29 03:06:03.778264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:47.810 [2024-11-29 03:06:03.778300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.810 [2024-11-29 03:06:03.778311] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.810 [2024-11-29 03:06:03.778329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.810 [2024-11-29 03:06:03.778336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.810 [2024-11-29 03:06:03.778343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.810 [2024-11-29 03:06:03.778351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.810 [2024-11-29 03:06:03.778362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.810 [2024-11-29 03:06:03.778370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:18:47.810 [2024-11-29 03:06:03.778377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.810 [2024-11-29 03:06:03.792487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.810 [2024-11-29 03:06:03.792645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.810 [2024-11-29 03:06:03.792699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.057 ms 00:18:47.810 [2024-11-29 03:06:03.792723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.810 [2024-11-29 03:06:03.792882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.811 [2024-11-29 03:06:03.792918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:47.811 [2024-11-29 03:06:03.792940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:47.811 [2024-11-29 03:06:03.792958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.819123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.819426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.073 [2024-11-29 03:06:03.819641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.124 ms 00:18:48.073 [2024-11-29 03:06:03.819698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.819942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.820026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.073 [2024-11-29 03:06:03.820049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:48.073 [2024-11-29 03:06:03.820456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.821072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.821217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.073 [2024-11-29 03:06:03.821297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:18:48.073 [2024-11-29 03:06:03.821322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 
03:06:03.821683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.821805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.073 [2024-11-29 03:06:03.821882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:18:48.073 [2024-11-29 03:06:03.821907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.830416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.830570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.073 [2024-11-29 03:06:03.830640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.468 ms 00:18:48.073 [2024-11-29 03:06:03.830663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.834528] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:48.073 [2024-11-29 03:06:03.834701] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:48.073 [2024-11-29 03:06:03.834765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.834786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:48.073 [2024-11-29 03:06:03.834806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:18:48.073 [2024-11-29 03:06:03.834870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.850889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.851067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:48.073 [2024-11-29 03:06:03.851126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.932 ms 00:18:48.073 [2024-11-29 03:06:03.851149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.073 [2024-11-29 03:06:03.854328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.073 [2024-11-29 03:06:03.854489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:48.073 [2024-11-29 03:06:03.854544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:18:48.074 [2024-11-29 03:06:03.854565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.857538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.857699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:48.074 [2024-11-29 03:06:03.857752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:18:48.074 [2024-11-29 03:06:03.857773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.858224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.858295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.074 [2024-11-29 03:06:03.858413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:48.074 [2024-11-29 03:06:03.858438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.880189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.880391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:48.074 [2024-11-29 03:06:03.880449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.706 ms 00:18:48.074 [2024-11-29 03:06:03.880473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.888620] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:48.074 [2024-11-29 03:06:03.907084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.907278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.074 [2024-11-29 03:06:03.907331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.519 ms 00:18:48.074 [2024-11-29 03:06:03.907354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.907458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.907487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:48.074 [2024-11-29 03:06:03.907516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:48.074 [2024-11-29 03:06:03.907535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.907609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.907703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.074 [2024-11-29 03:06:03.907723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:48.074 [2024-11-29 03:06:03.907742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.907780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.907802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:48.074 [2024-11-29 03:06:03.907911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.074 [2024-11-29 03:06:03.907944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.907999] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:48.074 [2024-11-29 03:06:03.908107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.908118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:48.074 [2024-11-29 03:06:03.908134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:48.074 [2024-11-29 03:06:03.908143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.913680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.913724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:48.074 [2024-11-29 03:06:03.913735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.508 ms 00:18:48.074 [2024-11-29 03:06:03.913743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.913869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.074 [2024-11-29 03:06:03.913881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:48.074 [2024-11-29 03:06:03.913894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:48.074 [2024-11-29 03:06:03.913902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.074 [2024-11-29 03:06:03.914900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.074 [2024-11-29 03:06:03.916191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.633 ms, result 0 00:18:48.074 [2024-11-29 03:06:03.917375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:48.074 [2024-11-29 03:06:03.924786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.020  [2024-11-29T03:06:05.957Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T03:06:07.347Z] Copying: 25/256 [MB] (11 MBps) [2024-11-29T03:06:08.292Z] Copying: 35/256 [MB] (10 MBps) [2024-11-29T03:06:09.237Z] Copying: 45/256 [MB] (10 MBps) [2024-11-29T03:06:10.226Z] Copying: 56/256 [MB] (10 MBps) [2024-11-29T03:06:11.196Z] Copying: 67/256 [MB] (11 MBps) [2024-11-29T03:06:12.139Z] Copying: 79/256 [MB] (11 MBps) [2024-11-29T03:06:13.080Z] Copying: 91/256 [MB] (11 MBps) [2024-11-29T03:06:14.022Z] Copying: 102/256 [MB] (11 MBps) [2024-11-29T03:06:14.963Z] Copying: 113/256 [MB] (10 MBps) [2024-11-29T03:06:16.350Z] Copying: 125/256 [MB] (11 MBps) [2024-11-29T03:06:17.294Z] Copying: 136/256 [MB] (11 MBps) [2024-11-29T03:06:18.238Z] Copying: 148/256 [MB] (11 MBps) [2024-11-29T03:06:19.180Z] Copying: 159/256 [MB] (10 MBps) [2024-11-29T03:06:20.123Z] Copying: 170/256 [MB] (10 MBps) [2024-11-29T03:06:21.065Z] Copying: 181/256 [MB] (11 MBps) [2024-11-29T03:06:22.011Z] Copying: 192/256 [MB] (11 MBps) [2024-11-29T03:06:22.957Z] Copying: 203/256 [MB] (10 MBps) [2024-11-29T03:06:24.344Z] Copying: 214/256 [MB] (10 MBps) [2024-11-29T03:06:25.288Z] Copying: 224/256 [MB] (10 MBps) [2024-11-29T03:06:26.230Z] Copying: 236/256 [MB] (11 MBps) [2024-11-29T03:06:26.805Z] Copying: 247/256 [MB] (11 MBps) [2024-11-29T03:06:26.805Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-29 03:06:26.766248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.813 [2024-11-29 03:06:26.768197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.768254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:10.813 [2024-11-29 03:06:26.768271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.813 [2024-11-29 03:06:26.768280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.768305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:10.813 [2024-11-29 03:06:26.769072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.769121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:10.813 [2024-11-29 03:06:26.769134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:19:10.813 [2024-11-29 03:06:26.769143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.769415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 
[2024-11-29 03:06:26.769435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:10.813 [2024-11-29 03:06:26.769449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:10.813 [2024-11-29 03:06:26.769459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.773188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.773211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:10.813 [2024-11-29 03:06:26.773222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:19:10.813 [2024-11-29 03:06:26.773230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.780544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.780599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:10.813 [2024-11-29 03:06:26.780611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.278 ms 00:19:10.813 [2024-11-29 03:06:26.780624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.783439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.783647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:10.813 [2024-11-29 03:06:26.783666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:19:10.813 [2024-11-29 03:06:26.783674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.788846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.788914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:10.813 [2024-11-29 03:06:26.788928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:19:10.813 [2024-11-29 03:06:26.788937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.789079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.789090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:10.813 [2024-11-29 03:06:26.789107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:10.813 [2024-11-29 03:06:26.789115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.792349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.792402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:10.813 [2024-11-29 03:06:26.792412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:19:10.813 [2024-11-29 03:06:26.792420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.795211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.795260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:10.813 [2024-11-29 03:06:26.795270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:19:10.813 [2024-11-29 03:06:26.795277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.798257] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.798468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:10.813 [2024-11-29 03:06:26.798489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:19:10.813 [2024-11-29 03:06:26.798497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.801190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.813 [2024-11-29 03:06:26.801246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:10.813 [2024-11-29 03:06:26.801257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:19:10.813 [2024-11-29 03:06:26.801265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.813 [2024-11-29 03:06:26.801309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:10.813 [2024-11-29 03:06:26.801324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 
03:06:26.801463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:10.813 [2024-11-29 03:06:26.801654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:19:10.814 [2024-11-29 03:06:26.801686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:10.814 [2024-11-29 03:06:26.801817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:11.077 [2024-11-29 03:06:26.802879] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:11.077 [2024-11-29 03:06:26.802887] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:19:11.077 [2024-11-29 03:06:26.802895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:11.077 [2024-11-29 03:06:26.802903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:11.077 [2024-11-29 03:06:26.802911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:11.077 [2024-11-29 03:06:26.802919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:11.077 [2024-11-29 03:06:26.802928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:11.077 [2024-11-29 03:06:26.802942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:11.077 [2024-11-29 03:06:26.802950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:11.077 [2024-11-29 03:06:26.802957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:11.077 [2024-11-29 03:06:26.802963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:11.077 [2024-11-29 03:06:26.802971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.077 [2024-11-29 03:06:26.802988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:11.077 [2024-11-29 03:06:26.802997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:19:11.077 [2024-11-29 03:06:26.803005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.805363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.077 [2024-11-29 03:06:26.805396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:11.077 [2024-11-29 03:06:26.805408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.331 ms 00:19:11.077 [2024-11-29 03:06:26.805423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.805611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.077 [2024-11-29 03:06:26.805623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:11.077 [2024-11-29 03:06:26.805633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 
00:19:11.077 [2024-11-29 03:06:26.805641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.814148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.814317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.077 [2024-11-29 03:06:26.814374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.814410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.814498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.814520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.077 [2024-11-29 03:06:26.814540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.814559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.814627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.814727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.077 [2024-11-29 03:06:26.814747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.814767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.814801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.814823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.077 [2024-11-29 03:06:26.814848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.814856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.829040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.829203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.077 [2024-11-29 03:06:26.829222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.829236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.839677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.839728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.077 [2024-11-29 03:06:26.839740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.839759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.839815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.839847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.077 [2024-11-29 03:06:26.839858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.077 [2024-11-29 03:06:26.839867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.077 [2024-11-29 03:06:26.839900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.077 [2024-11-29 03:06:26.839914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.078 [2024-11-29 03:06:26.839924] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.078 [2024-11-29 03:06:26.839933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.078 [2024-11-29 03:06:26.840029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.078 [2024-11-29 03:06:26.840041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.078 [2024-11-29 03:06:26.840058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.078 [2024-11-29 03:06:26.840067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.078 [2024-11-29 03:06:26.840105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.078 [2024-11-29 03:06:26.840118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:11.078 [2024-11-29 03:06:26.840128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.078 [2024-11-29 03:06:26.840137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.078 [2024-11-29 03:06:26.840183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.078 [2024-11-29 03:06:26.840194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.078 [2024-11-29 03:06:26.840207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.078 [2024-11-29 03:06:26.840216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.078 [2024-11-29 03:06:26.840268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.078 [2024-11-29 03:06:26.840282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.078 [2024-11-29 03:06:26.840292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.078 [2024-11-29 03:06:26.840302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.078 [2024-11-29 03:06:26.840461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.233 ms, result 0 00:19:11.078 00:19:11.078 00:19:11.078 03:06:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:11.078 03:06:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:11.650 03:06:27 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.911 [2024-11-29 03:06:27.666930] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
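The three ftl.ftl_trim commands above are the verification half of the test: trim.sh@86 compares the first 4194304 bytes of the dumped data file against /dev/zero to confirm the trimmed range reads back as zeros, trim.sh@87 fingerprints the file with md5sum, and trim.sh@90 rewrites a random pattern through the ftl0 bdev with spdk_dd (its startup log continues below). The sizes agree: --count=1024 blocks at the 4 KiB FTL block size is exactly the 4194304 bytes that cmp checks, and the dd progress line later reports "Copying: 4096/4096 [kB]". Below is a minimal C sketch of that zero-check, assuming a plain file argument; it is illustrative only, not the SPDK test code.

    /* Sketch: verify the first 4 MiB of a file reads as zeros, the same
     * condition "cmp --bytes=4194304 <file> /dev/zero" tests above.
     * Hypothetical standalone helper, not part of SPDK. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        enum { CHECK_BYTES = 4194304, BLOCK = 4096 }; /* 1024 blocks x 4 KiB */
        static unsigned char buf[BLOCK], zero[BLOCK]; /* statics start zeroed */
        FILE *f = fopen(argc > 1 ? argv[1] : "data", "rb");
        if (!f) { perror("fopen"); return 1; }
        for (long off = 0; off < CHECK_BYTES; off += BLOCK) {
            if (fread(buf, 1, BLOCK, f) != BLOCK || memcmp(buf, zero, BLOCK)) {
                fprintf(stderr, "non-zero or short read at offset %ld\n", off);
                return 1;
            }
        }
        fclose(f);
        puts("first 4 MiB is all zeros");
        return 0;
    }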
00:19:11.911 [2024-11-29 03:06:27.667090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87800 ] 00:19:11.911 [2024-11-29 03:06:27.816408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.911 [2024-11-29 03:06:27.846965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.173 [2024-11-29 03:06:27.971156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.173 [2024-11-29 03:06:27.971242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.173 [2024-11-29 03:06:28.132940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.173 [2024-11-29 03:06:28.133167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.173 [2024-11-29 03:06:28.133194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.174 [2024-11-29 03:06:28.133203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.135819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.135896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.174 [2024-11-29 03:06:28.135908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:19:12.174 [2024-11-29 03:06:28.135916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.136037] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.174 [2024-11-29 03:06:28.136307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.174 [2024-11-29 03:06:28.136324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.136332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.174 [2024-11-29 03:06:28.136342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:12.174 [2024-11-29 03:06:28.136350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.138467] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.174 [2024-11-29 03:06:28.142434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.142610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.174 [2024-11-29 03:06:28.142692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:19:12.174 [2024-11-29 03:06:28.142718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.142921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.143173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.174 [2024-11-29 03:06:28.143206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:12.174 [2024-11-29 03:06:28.143234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.151656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:12.174 [2024-11-29 03:06:28.151823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.174 [2024-11-29 03:06:28.151946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.353 ms 00:19:12.174 [2024-11-29 03:06:28.151971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.152138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.152169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.174 [2024-11-29 03:06:28.152191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:12.174 [2024-11-29 03:06:28.152387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.152445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.152456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.174 [2024-11-29 03:06:28.152467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:12.174 [2024-11-29 03:06:28.152474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.152499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:12.174 [2024-11-29 03:06:28.154588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.154631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.174 [2024-11-29 03:06:28.154642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:19:12.174 [2024-11-29 03:06:28.154660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.154706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.154715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.174 [2024-11-29 03:06:28.154724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:12.174 [2024-11-29 03:06:28.154732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.154757] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.174 [2024-11-29 03:06:28.154782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:12.174 [2024-11-29 03:06:28.154854] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.174 [2024-11-29 03:06:28.154874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:12.174 [2024-11-29 03:06:28.154984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.174 [2024-11-29 03:06:28.154996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.174 [2024-11-29 03:06:28.155011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:12.174 [2024-11-29 03:06:28.155025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.174 [2024-11-29 03:06:28.155035] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.174 [2024-11-29 03:06:28.155042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:12.174 [2024-11-29 03:06:28.155050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.174 [2024-11-29 03:06:28.155058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.174 [2024-11-29 03:06:28.155071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.174 [2024-11-29 03:06:28.155081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.155089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.174 [2024-11-29 03:06:28.155097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:12.174 [2024-11-29 03:06:28.155106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.155193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.174 [2024-11-29 03:06:28.155206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.174 [2024-11-29 03:06:28.155217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:12.174 [2024-11-29 03:06:28.155225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.174 [2024-11-29 03:06:28.155325] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.174 [2024-11-29 03:06:28.155338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.174 [2024-11-29 03:06:28.155346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.174 [2024-11-29 03:06:28.155353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.174 [2024-11-29 03:06:28.155361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.174 [2024-11-29 03:06:28.155368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.174 [2024-11-29 03:06:28.155376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:12.174 [2024-11-29 03:06:28.155385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.174 [2024-11-29 03:06:28.155392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:12.174 [2024-11-29 03:06:28.155398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.174 [2024-11-29 03:06:28.155404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.174 [2024-11-29 03:06:28.155411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:12.174 [2024-11-29 03:06:28.155418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.174 [2024-11-29 03:06:28.155425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.174 [2024-11-29 03:06:28.155432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:12.174 [2024-11-29 03:06:28.155439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.174 [2024-11-29 03:06:28.155445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.174 [2024-11-29 03:06:28.155452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:12.174 [2024-11-29 03:06:28.155458] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.175 [2024-11-29 03:06:28.155474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.175 [2024-11-29 03:06:28.155499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.175 [2024-11-29 03:06:28.155519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.175 [2024-11-29 03:06:28.155539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.175 [2024-11-29 03:06:28.155557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.175 [2024-11-29 03:06:28.155571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.175 [2024-11-29 03:06:28.155578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:12.175 [2024-11-29 03:06:28.155584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.175 [2024-11-29 03:06:28.155591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.175 [2024-11-29 03:06:28.155597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:12.175 [2024-11-29 03:06:28.155606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.175 [2024-11-29 03:06:28.155619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:12.175 [2024-11-29 03:06:28.155626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.175 [2024-11-29 03:06:28.155640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.175 [2024-11-29 03:06:28.155647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.175 [2024-11-29 03:06:28.155662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.175 [2024-11-29 03:06:28.155668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.175 [2024-11-29 03:06:28.155674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.175 
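The layout dump above (continuing below with the base-device regions) is internally consistent with the parameters reported a few lines earlier: 23592960 L2P entries at the stated 4-byte address size come to exactly the 90.00 MiB of the l2p region, and 2048 P2L checkpoint pages at a 4 KiB block size come to the 8.00 MiB of each of the p2l0..p2l3 regions. A minimal sketch of that arithmetic, with the constants copied from the log:

    /* Sketch: reproduce the region sizes from the FTL layout dump above.
     * Constants are taken from the log; the 4 KiB block size is an
     * assumption consistent with the dump, not read from it directly. */
    #include <stdio.h>

    int main(void)
    {
        const double MiB = 1024.0 * 1024.0;
        const long l2p_entries = 23592960; /* "L2P entries: 23592960"      */
        const int addr_size = 4;           /* "L2P address size: 4"        */
        const long p2l_pages = 2048;       /* "P2L checkpoint pages: 2048" */
        const int blk_size = 4096;         /* assumed FTL block size       */

        printf("l2p region: %.2f MiB\n", l2p_entries * addr_size / MiB);           /* 90.00 */
        printf("each p2l region: %.2f MiB\n", p2l_pages * (double)blk_size / MiB); /* 8.00  */
        return 0;
    }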
[2024-11-29 03:06:28.155681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.175 [2024-11-29 03:06:28.155687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.175 [2024-11-29 03:06:28.155695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.175 [2024-11-29 03:06:28.155704] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.175 [2024-11-29 03:06:28.155713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:12.175 [2024-11-29 03:06:28.155732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:12.175 [2024-11-29 03:06:28.155740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:12.175 [2024-11-29 03:06:28.155747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:12.175 [2024-11-29 03:06:28.155754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:12.175 [2024-11-29 03:06:28.155761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:12.175 [2024-11-29 03:06:28.155768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:12.175 [2024-11-29 03:06:28.155781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:12.175 [2024-11-29 03:06:28.155788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:12.175 [2024-11-29 03:06:28.155797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:12.175 [2024-11-29 03:06:28.155852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.175 [2024-11-29 03:06:28.155863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.175 [2024-11-29 03:06:28.155881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.175 [2024-11-29 03:06:28.155889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.175 [2024-11-29 03:06:28.155896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.175 [2024-11-29 03:06:28.155903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.175 [2024-11-29 03:06:28.155911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.175 [2024-11-29 03:06:28.155919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:19:12.175 [2024-11-29 03:06:28.155930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.170658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.170709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.439 [2024-11-29 03:06:28.170723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.674 ms 00:19:12.439 [2024-11-29 03:06:28.170732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.170898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.170917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.439 [2024-11-29 03:06:28.170931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:12.439 [2024-11-29 03:06:28.170939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.191235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.191281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.439 [2024-11-29 03:06:28.191293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.258 ms 00:19:12.439 [2024-11-29 03:06:28.191301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.191387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.191398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.439 [2024-11-29 03:06:28.191407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.439 [2024-11-29 03:06:28.191415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.191771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.191787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.439 [2024-11-29 03:06:28.191796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:12.439 [2024-11-29 03:06:28.191803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.191974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.191990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.439 [2024-11-29 03:06:28.192006] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:12.439 [2024-11-29 03:06:28.192015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.198325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.198363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.439 [2024-11-29 03:06:28.198378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.288 ms 00:19:12.439 [2024-11-29 03:06:28.198387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.201509] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:12.439 [2024-11-29 03:06:28.201673] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.439 [2024-11-29 03:06:28.201692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.201702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.439 [2024-11-29 03:06:28.201712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:19:12.439 [2024-11-29 03:06:28.201721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.216764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.216903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.439 [2024-11-29 03:06:28.216920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.977 ms 00:19:12.439 [2024-11-29 03:06:28.216928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.218761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.218796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.439 [2024-11-29 03:06:28.218805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:19:12.439 [2024-11-29 03:06:28.218812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.220802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.220923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:12.439 [2024-11-29 03:06:28.220938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:19:12.439 [2024-11-29 03:06:28.220945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.221271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.221283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.439 [2024-11-29 03:06:28.221292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:19:12.439 [2024-11-29 03:06:28.221299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.236972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.237016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:12.439 [2024-11-29 03:06:28.237033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.650 ms 00:19:12.439 [2024-11-29 03:06:28.237041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.244506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.439 [2024-11-29 03:06:28.258555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.258597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.439 [2024-11-29 03:06:28.258609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.453 ms 00:19:12.439 [2024-11-29 03:06:28.258617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.258684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.258694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:12.439 [2024-11-29 03:06:28.258705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:12.439 [2024-11-29 03:06:28.258713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.258756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.258764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.439 [2024-11-29 03:06:28.258772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:12.439 [2024-11-29 03:06:28.258780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.258804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.258813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.439 [2024-11-29 03:06:28.258820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.439 [2024-11-29 03:06:28.258854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.258888] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:12.439 [2024-11-29 03:06:28.258898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.258906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:12.439 [2024-11-29 03:06:28.258913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:12.439 [2024-11-29 03:06:28.258921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.262983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.263016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.439 [2024-11-29 03:06:28.263026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.041 ms 00:19:12.439 [2024-11-29 03:06:28.263039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 [2024-11-29 03:06:28.263124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.439 [2024-11-29 03:06:28.263135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.439 [2024-11-29 03:06:28.263143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:12.439 [2024-11-29 03:06:28.263150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.439 
[2024-11-29 03:06:28.263921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.439 [2024-11-29 03:06:28.264925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.729 ms, result 0 00:19:12.439 [2024-11-29 03:06:28.266093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.439 [2024-11-29 03:06:28.275483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.703 [2024-11-29T03:06:28.695Z] Copying: 4096/4096 [kB] (average 10 MBps) [2024-11-29 03:06:28.641933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.703 [2024-11-29 03:06:28.642656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.642690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.703 [2024-11-29 03:06:28.642705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.703 [2024-11-29 03:06:28.642712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.642732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.703 [2024-11-29 03:06:28.643182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.643202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.703 [2024-11-29 03:06:28.643211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:12.703 [2024-11-29 03:06:28.643218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.646878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.646990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.703 [2024-11-29 03:06:28.647039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:19:12.703 [2024-11-29 03:06:28.647063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.653510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.653688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.703 [2024-11-29 03:06:28.653705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.399 ms 00:19:12.703 [2024-11-29 03:06:28.653712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.660812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.660917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.703 [2024-11-29 03:06:28.660975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.062 ms 00:19:12.703 [2024-11-29 03:06:28.661009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.663353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.663462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.703 [2024-11-29 03:06:28.663513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.282 ms 00:19:12.703 [2024-11-29 03:06:28.663535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.667865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.667972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.703 [2024-11-29 03:06:28.668023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:19:12.703 [2024-11-29 03:06:28.668045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.668175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.668201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.703 [2024-11-29 03:06:28.668229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:12.703 [2024-11-29 03:06:28.668248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.671110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.671245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.703 [2024-11-29 03:06:28.671299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:19:12.703 [2024-11-29 03:06:28.671322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.673713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.673822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.703 [2024-11-29 03:06:28.673882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:19:12.703 [2024-11-29 03:06:28.673904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.675798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.703 [2024-11-29 03:06:28.675942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.703 [2024-11-29 03:06:28.675992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:19:12.703 [2024-11-29 03:06:28.676019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.703 [2024-11-29 03:06:28.677437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.704 [2024-11-29 03:06:28.677546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.704 [2024-11-29 03:06:28.677594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:19:12.704 [2024-11-29 03:06:28.677616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.704 [2024-11-29 03:06:28.678174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.704 [2024-11-29 03:06:28.678227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 
03:06:28.678424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.678984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.679524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:12.704 [2024-11-29 03:06:28.679860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.704 [2024-11-29 03:06:28.680691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.705 [2024-11-29 03:06:28.680817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.705 [2024-11-29 03:06:28.680840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:19:12.705 [2024-11-29 03:06:28.680849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.705 [2024-11-29 03:06:28.680857] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.705 
[2024-11-29 03:06:28.680865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.705 [2024-11-29 03:06:28.680880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.705 [2024-11-29 03:06:28.680888] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.705 [2024-11-29 03:06:28.680901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.705 [2024-11-29 03:06:28.680909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.705 [2024-11-29 03:06:28.680915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.705 [2024-11-29 03:06:28.680922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.705 [2024-11-29 03:06:28.680932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.705 [2024-11-29 03:06:28.680940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.705 [2024-11-29 03:06:28.680949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:19:12.705 [2024-11-29 03:06:28.680957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.682775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.705 [2024-11-29 03:06:28.682794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.705 [2024-11-29 03:06:28.682809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:19:12.705 [2024-11-29 03:06:28.682820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.682932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.705 [2024-11-29 03:06:28.682942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.705 [2024-11-29 03:06:28.682951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:12.705 [2024-11-29 03:06:28.682959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.690153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.705 [2024-11-29 03:06:28.690265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.705 [2024-11-29 03:06:28.690324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.705 [2024-11-29 03:06:28.690347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.690420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.705 [2024-11-29 03:06:28.690443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.705 [2024-11-29 03:06:28.690462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.705 [2024-11-29 03:06:28.690481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.690539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.705 [2024-11-29 03:06:28.690594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.705 [2024-11-29 03:06:28.690614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.705 [2024-11-29 03:06:28.690633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.705 [2024-11-29 03:06:28.690667] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:12.705 [2024-11-29 03:06:28.690688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.705 [2024-11-29 03:06:28.690707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.705 [2024-11-29 03:06:28.690770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.703544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.703674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.967 [2024-11-29 03:06:28.703723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.703751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.714053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.714184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.967 [2024-11-29 03:06:28.714232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.714254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.714311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.714334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.967 [2024-11-29 03:06:28.714355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.714375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.714421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.714442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.967 [2024-11-29 03:06:28.714463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.714575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.714680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.714707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.967 [2024-11-29 03:06:28.714736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.714756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.714805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.714855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.967 [2024-11-29 03:06:28.714877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.714950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.715015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.715040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.967 [2024-11-29 03:06:28.715062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.715080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:12.967 [2024-11-29 03:06:28.715150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.967 [2024-11-29 03:06:28.715176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.967 [2024-11-29 03:06:28.715196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.967 [2024-11-29 03:06:28.715221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.967 [2024-11-29 03:06:28.715431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.740 ms, result 0 00:19:12.967 00:19:12.967 00:19:12.967 03:06:28 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87821 00:19:12.967 03:06:28 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87821 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87821 ']' 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.967 03:06:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:12.967 03:06:28 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:13.228 [2024-11-29 03:06:28.992700] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:13.228 [2024-11-29 03:06:28.993250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87821 ] 00:19:13.228 [2024-11-29 03:06:29.137625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.228 [2024-11-29 03:06:29.170056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.171 03:06:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.171 03:06:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:14.171 03:06:29 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:14.171 [2024-11-29 03:06:30.053943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.171 [2024-11-29 03:06:30.054046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.433 [2024-11-29 03:06:30.235603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.235680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.433 [2024-11-29 03:06:30.235700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:14.433 [2024-11-29 03:06:30.235712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.238569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.238922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.433 [2024-11-29 03:06:30.238947] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:19:14.433 [2024-11-29 03:06:30.238959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.239092] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.433 [2024-11-29 03:06:30.239419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.433 [2024-11-29 03:06:30.239438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.239451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.433 [2024-11-29 03:06:30.239462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:19:14.433 [2024-11-29 03:06:30.239475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.241936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:14.433 [2024-11-29 03:06:30.247003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.247063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:14.433 [2024-11-29 03:06:30.247077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:19:14.433 [2024-11-29 03:06:30.247086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.247178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.247189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:14.433 [2024-11-29 03:06:30.247204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:14.433 [2024-11-29 03:06:30.247212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.258947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.258991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.433 [2024-11-29 03:06:30.259006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.671 ms 00:19:14.433 [2024-11-29 03:06:30.259019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.259165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.259178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.433 [2024-11-29 03:06:30.259197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:14.433 [2024-11-29 03:06:30.259207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.259238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.259250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.433 [2024-11-29 03:06:30.259260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:14.433 [2024-11-29 03:06:30.259268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.259297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:14.433 [2024-11-29 03:06:30.262052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:14.433 [2024-11-29 03:06:30.262102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.433 [2024-11-29 03:06:30.262116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:19:14.433 [2024-11-29 03:06:30.262133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.262178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.433 [2024-11-29 03:06:30.262189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.433 [2024-11-29 03:06:30.262198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:14.433 [2024-11-29 03:06:30.262209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.433 [2024-11-29 03:06:30.262231] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:14.433 [2024-11-29 03:06:30.262258] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:14.433 [2024-11-29 03:06:30.262301] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:14.433 [2024-11-29 03:06:30.262325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:14.433 [2024-11-29 03:06:30.262439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.433 [2024-11-29 03:06:30.262455] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.433 [2024-11-29 03:06:30.262468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:14.433 [2024-11-29 03:06:30.262484] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.434 [2024-11-29 03:06:30.262498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.434 [2024-11-29 03:06:30.262516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:14.434 [2024-11-29 03:06:30.262526] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.434 [2024-11-29 03:06:30.262540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.434 [2024-11-29 03:06:30.262549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.434 [2024-11-29 03:06:30.262561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-11-29 03:06:30.262570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.434 [2024-11-29 03:06:30.262581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:19:14.434 [2024-11-29 03:06:30.262593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-11-29 03:06:30.262684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-11-29 03:06:30.262693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.434 [2024-11-29 03:06:30.262704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:14.434 [2024-11-29 03:06:30.262713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-11-29 03:06:30.262821] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.434 [2024-11-29 03:06:30.262858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.434 [2024-11-29 03:06:30.262871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.434 [2024-11-29 03:06:30.262887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.262904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.434 [2024-11-29 03:06:30.262913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.262924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:14.434 [2024-11-29 03:06:30.262933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.434 [2024-11-29 03:06:30.262945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:14.434 [2024-11-29 03:06:30.262954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.434 [2024-11-29 03:06:30.262966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.434 [2024-11-29 03:06:30.262976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:14.434 [2024-11-29 03:06:30.262989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.434 [2024-11-29 03:06:30.262997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.434 [2024-11-29 03:06:30.263010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:14.434 [2024-11-29 03:06:30.263020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.434 [2024-11-29 03:06:30.263042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.434 [2024-11-29 03:06:30.263074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.434 [2024-11-29 03:06:30.263096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.434 [2024-11-29 03:06:30.263120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.434 [2024-11-29 03:06:30.263144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.434 [2024-11-29 
03:06:30.263171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.434 [2024-11-29 03:06:30.263186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.434 [2024-11-29 03:06:30.263194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:14.434 [2024-11-29 03:06:30.263205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.434 [2024-11-29 03:06:30.263212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.434 [2024-11-29 03:06:30.263220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:14.434 [2024-11-29 03:06:30.263227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.434 [2024-11-29 03:06:30.263243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:14.434 [2024-11-29 03:06:30.263252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263258] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.434 [2024-11-29 03:06:30.263272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.434 [2024-11-29 03:06:30.263279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.434 [2024-11-29 03:06:30.263297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.434 [2024-11-29 03:06:30.263308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.434 [2024-11-29 03:06:30.263315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.434 [2024-11-29 03:06:30.263325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.434 [2024-11-29 03:06:30.263331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.434 [2024-11-29 03:06:30.263342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.434 [2024-11-29 03:06:30.263352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.434 [2024-11-29 03:06:30.263366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:14.434 [2024-11-29 03:06:30.263388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:14.434 [2024-11-29 03:06:30.263395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:14.434 [2024-11-29 03:06:30.263405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:14.434 [2024-11-29 03:06:30.263413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:14.434 
[2024-11-29 03:06:30.263422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:14.434 [2024-11-29 03:06:30.263429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:14.434 [2024-11-29 03:06:30.263438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:14.434 [2024-11-29 03:06:30.263445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:14.434 [2024-11-29 03:06:30.263455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:14.434 [2024-11-29 03:06:30.263508] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.434 [2024-11-29 03:06:30.263518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.434 [2024-11-29 03:06:30.263542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.434 [2024-11-29 03:06:30.263552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.434 [2024-11-29 03:06:30.263562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.434 [2024-11-29 03:06:30.263570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-11-29 03:06:30.263583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.434 [2024-11-29 03:06:30.263592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:19:14.434 [2024-11-29 03:06:30.263602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-11-29 03:06:30.284419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-11-29 03:06:30.284478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.434 [2024-11-29 03:06:30.284492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.731 ms 00:19:14.434 [2024-11-29 03:06:30.284504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.434 [2024-11-29 03:06:30.284651] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.434 [2024-11-29 03:06:30.284674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.434 [2024-11-29 03:06:30.284684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:14.435 [2024-11-29 03:06:30.284694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.302263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.302320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.435 [2024-11-29 03:06:30.302333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.544 ms 00:19:14.435 [2024-11-29 03:06:30.302349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.302434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.302448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.435 [2024-11-29 03:06:30.302459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.435 [2024-11-29 03:06:30.302472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.303218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.303266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.435 [2024-11-29 03:06:30.303278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:19:14.435 [2024-11-29 03:06:30.303290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.303478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.303492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.435 [2024-11-29 03:06:30.303502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:14.435 [2024-11-29 03:06:30.303512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.315516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.315573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.435 [2024-11-29 03:06:30.315586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.977 ms 00:19:14.435 [2024-11-29 03:06:30.315597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.331544] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:14.435 [2024-11-29 03:06:30.331601] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:14.435 [2024-11-29 03:06:30.331619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.331634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:14.435 [2024-11-29 03:06:30.331648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.886 ms 00:19:14.435 [2024-11-29 03:06:30.331661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.347922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 
03:06:30.347962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:14.435 [2024-11-29 03:06:30.347973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.204 ms 00:19:14.435 [2024-11-29 03:06:30.347985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.350254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.350291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:14.435 [2024-11-29 03:06:30.350300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:19:14.435 [2024-11-29 03:06:30.350310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.352158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.352192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:14.435 [2024-11-29 03:06:30.352201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:19:14.435 [2024-11-29 03:06:30.352211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.352552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.352567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.435 [2024-11-29 03:06:30.352576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:14.435 [2024-11-29 03:06:30.352585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.373249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.373290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:14.435 [2024-11-29 03:06:30.373301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.644 ms 00:19:14.435 [2024-11-29 03:06:30.373314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.381302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:14.435 [2024-11-29 03:06:30.398729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.398768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.435 [2024-11-29 03:06:30.398786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.336 ms 00:19:14.435 [2024-11-29 03:06:30.398794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.398904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.398915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:14.435 [2024-11-29 03:06:30.398930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:14.435 [2024-11-29 03:06:30.398938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.398998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.399013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.435 [2024-11-29 03:06:30.399024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:14.435 [2024-11-29 
03:06:30.399031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.399058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.399068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.435 [2024-11-29 03:06:30.399080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.435 [2024-11-29 03:06:30.399088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.399124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:14.435 [2024-11-29 03:06:30.399134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.399144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:14.435 [2024-11-29 03:06:30.399155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:14.435 [2024-11-29 03:06:30.399165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.403968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.404132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.435 [2024-11-29 03:06:30.404151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.780 ms 00:19:14.435 [2024-11-29 03:06:30.404161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.404236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.435 [2024-11-29 03:06:30.404252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.435 [2024-11-29 03:06:30.404261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:14.435 [2024-11-29 03:06:30.404270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.435 [2024-11-29 03:06:30.405192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.435 [2024-11-29 03:06:30.406285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 169.300 ms, result 0 00:19:14.435 [2024-11-29 03:06:30.408094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.697 Some configs were skipped because the RPC state that can call them passed over. 
00:19:14.697 03:06:30 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:14.697 [2024-11-29 03:06:30.638461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.697 [2024-11-29 03:06:30.638671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:14.697 [2024-11-29 03:06:30.638747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:19:14.697 [2024-11-29 03:06:30.638771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.697 [2024-11-29 03:06:30.638851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.511 ms, result 0 00:19:14.697 true 00:19:14.697 03:06:30 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:14.959 [2024-11-29 03:06:30.857897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.959 [2024-11-29 03:06:30.858100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:14.959 [2024-11-29 03:06:30.858170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms 00:19:14.959 [2024-11-29 03:06:30.858196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.959 [2024-11-29 03:06:30.858256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.695 ms, result 0 00:19:14.959 true 00:19:14.959 03:06:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87821 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87821 ']' 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87821 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87821 00:19:14.959 killing process with pid 87821 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87821' 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87821 00:19:14.959 03:06:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87821 00:19:15.221 [2024-11-29 03:06:31.096298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.221 [2024-11-29 03:06:31.096368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:15.221 [2024-11-29 03:06:31.096386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:15.221 [2024-11-29 03:06:31.096395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.221 [2024-11-29 03:06:31.096443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:15.221 [2024-11-29 03:06:31.097267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.221 [2024-11-29 03:06:31.097302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:15.221 [2024-11-29 03:06:31.097314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.807 ms 00:19:15.221 [2024-11-29 03:06:31.097327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.221 [2024-11-29 03:06:31.097651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.221 [2024-11-29 03:06:31.097665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:15.221 [2024-11-29 03:06:31.097675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:15.221 [2024-11-29 03:06:31.097686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.221 [2024-11-29 03:06:31.102406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.102452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:15.222 [2024-11-29 03:06:31.102467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:19:15.222 [2024-11-29 03:06:31.102478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.109484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.109560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:15.222 [2024-11-29 03:06:31.109572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:19:15.222 [2024-11-29 03:06:31.109588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.112277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.112330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:15.222 [2024-11-29 03:06:31.112341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:19:15.222 [2024-11-29 03:06:31.112351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.117780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.118048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:15.222 [2024-11-29 03:06:31.118069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:19:15.222 [2024-11-29 03:06:31.118080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.118260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.118282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:15.222 [2024-11-29 03:06:31.118292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:15.222 [2024-11-29 03:06:31.118306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.121728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.121915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:15.222 [2024-11-29 03:06:31.121933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:19:15.222 [2024-11-29 03:06:31.121947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.124878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.124930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:15.222 [2024-11-29 
03:06:31.124940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:19:15.222 [2024-11-29 03:06:31.124950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.127332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.127386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:15.222 [2024-11-29 03:06:31.127395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:19:15.222 [2024-11-29 03:06:31.127406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.129521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.222 [2024-11-29 03:06:31.129602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:15.222 [2024-11-29 03:06:31.129612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:19:15.222 [2024-11-29 03:06:31.129622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.222 [2024-11-29 03:06:31.129667] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:15.222 [2024-11-29 03:06:31.129687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129876] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.129990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:15.222 [2024-11-29 03:06:31.130107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 
03:06:31.130124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:15.223 [2024-11-29 03:06:31.130354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:15.223 [2024-11-29 03:06:31.130690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:15.223 [2024-11-29 03:06:31.130710] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:19:15.223 [2024-11-29 03:06:31.130722] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:15.223 [2024-11-29 03:06:31.130730] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:15.223 [2024-11-29 03:06:31.130741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:15.223 [2024-11-29 03:06:31.130753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:15.223 [2024-11-29 03:06:31.130766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:15.223 [2024-11-29 03:06:31.130774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:15.223 [2024-11-29 03:06:31.130785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:15.223 [2024-11-29 03:06:31.130793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:15.224 [2024-11-29 03:06:31.130804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:15.224 [2024-11-29 03:06:31.130812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.224 [2024-11-29 03:06:31.130823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:15.224 [2024-11-29 03:06:31.130858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:19:15.224 [2024-11-29 03:06:31.130871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.133473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.224 [2024-11-29 03:06:31.133510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:15.224 [2024-11-29 03:06:31.133522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:19:15.224 [2024-11-29 03:06:31.133553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.133713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:15.224 [2024-11-29 03:06:31.133726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:15.224 [2024-11-29 03:06:31.133736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:15.224 [2024-11-29 03:06:31.133749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.144087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.144282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.224 [2024-11-29 03:06:31.144300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.144312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.144419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.144434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.224 [2024-11-29 03:06:31.144444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.144460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.144512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.144524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.224 [2024-11-29 03:06:31.144532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.144542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.144563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.144575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.224 [2024-11-29 03:06:31.144584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.144595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.163589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.163805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.224 [2024-11-29 03:06:31.163845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.163866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.178354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.178560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.224 [2024-11-29 03:06:31.178579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.178597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.178681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.178701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.224 [2024-11-29 03:06:31.178710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.178721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:15.224 [2024-11-29 03:06:31.178760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.178772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.224 [2024-11-29 03:06:31.178781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.178791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.178903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.178917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.224 [2024-11-29 03:06:31.178926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.178937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.178983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.178997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:15.224 [2024-11-29 03:06:31.179008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.179022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.179076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.179094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.224 [2024-11-29 03:06:31.179104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.179115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.179177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.224 [2024-11-29 03:06:31.179192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.224 [2024-11-29 03:06:31.179205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.224 [2024-11-29 03:06:31.179217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.224 [2024-11-29 03:06:31.179395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.063 ms, result 0 00:19:15.485 03:06:31 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:15.485 [2024-11-29 03:06:31.468487] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
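
Here trim.sh hands off to spdk_dd to read the freshly written data back out of the FTL bdev into a flat file. A minimal standalone sketch of the same copy, assuming a 4 KiB logical block size (65536 blocks x 4 KiB = 256 MiB, matching the 256 MB the progress ticks further down add up to):

    # From the SPDK repo root: read 65536 logical blocks from bdev ftl0
    # into a plain file, using the bdev/FTL configuration saved by the test.
    ./build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

--ib names the input bdev and --of the output file; with --json pointing at the saved ftl.json, spdk_dd loads the bdev configuration and brings ftl0 back up on its base and NV cache bdevs before copying, which is why a full 'FTL startup' sequence follows in the log.
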
00:19:15.485 [2024-11-29 03:06:31.468638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87862 ] 00:19:15.745 [2024-11-29 03:06:31.615774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.745 [2024-11-29 03:06:31.645970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.007 [2024-11-29 03:06:31.763970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:16.007 [2024-11-29 03:06:31.764053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:16.007 [2024-11-29 03:06:31.926417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.926479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:16.007 [2024-11-29 03:06:31.926494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:16.007 [2024-11-29 03:06:31.926503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.929157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.929214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.007 [2024-11-29 03:06:31.929229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:16.007 [2024-11-29 03:06:31.929237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.929361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:16.007 [2024-11-29 03:06:31.929645] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:16.007 [2024-11-29 03:06:31.929665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.929674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.007 [2024-11-29 03:06:31.929683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:16.007 [2024-11-29 03:06:31.929695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.931680] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:16.007 [2024-11-29 03:06:31.935583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.935638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:16.007 [2024-11-29 03:06:31.935656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:19:16.007 [2024-11-29 03:06:31.935669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.935754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.935764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:16.007 [2024-11-29 03:06:31.935773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:16.007 [2024-11-29 03:06:31.935781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.944127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
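
The layout numbers dumped just below are worth a sanity check: the device reports 23592960 L2P entries at an address size of 4 bytes, and the l2p region in the NV cache layout is listed at exactly 90.00 MiB, i.e. one 4-byte entry per logical block:

    # l2p region size implied by the reported geometry, in MiB:
    echo $((23592960 * 4 / 1024 / 1024))   # -> 90

The same dump accounts for the rest of the layout: band metadata and its mirror, four 8.00 MiB P2L regions (2048 checkpoint pages each), trim and NV-cache metadata with mirrors, and the 102400.00 MiB data_btm region that backs user data on the base device.
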
00:19:16.007 [2024-11-29 03:06:31.944173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.007 [2024-11-29 03:06:31.944184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.299 ms 00:19:16.007 [2024-11-29 03:06:31.944201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.944348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.944360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.007 [2024-11-29 03:06:31.944370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:16.007 [2024-11-29 03:06:31.944381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.007 [2024-11-29 03:06:31.944409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.007 [2024-11-29 03:06:31.944418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:16.008 [2024-11-29 03:06:31.944426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:16.008 [2024-11-29 03:06:31.944434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.008 [2024-11-29 03:06:31.944457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:16.008 [2024-11-29 03:06:31.946619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.008 [2024-11-29 03:06:31.946661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.008 [2024-11-29 03:06:31.946672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:19:16.008 [2024-11-29 03:06:31.946685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.008 [2024-11-29 03:06:31.946734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.008 [2024-11-29 03:06:31.946743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:16.008 [2024-11-29 03:06:31.946752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:16.008 [2024-11-29 03:06:31.946759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.008 [2024-11-29 03:06:31.946778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:16.008 [2024-11-29 03:06:31.946798] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:16.008 [2024-11-29 03:06:31.946867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:16.008 [2024-11-29 03:06:31.946890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:16.008 [2024-11-29 03:06:31.946996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:16.008 [2024-11-29 03:06:31.947011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:16.008 [2024-11-29 03:06:31.947022] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:16.008 [2024-11-29 03:06:31.947033] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947042] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947051] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:16.008 [2024-11-29 03:06:31.947059] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:16.008 [2024-11-29 03:06:31.947071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:16.008 [2024-11-29 03:06:31.947080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:16.008 [2024-11-29 03:06:31.947091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.008 [2024-11-29 03:06:31.947098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:16.008 [2024-11-29 03:06:31.947106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:16.008 [2024-11-29 03:06:31.947118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.008 [2024-11-29 03:06:31.947207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.008 [2024-11-29 03:06:31.947216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:16.008 [2024-11-29 03:06:31.947223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:16.008 [2024-11-29 03:06:31.947230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.008 [2024-11-29 03:06:31.947329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:16.008 [2024-11-29 03:06:31.947343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:16.008 [2024-11-29 03:06:31.947352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:16.008 [2024-11-29 03:06:31.947384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:16.008 [2024-11-29 03:06:31.947412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.008 [2024-11-29 03:06:31.947429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:16.008 [2024-11-29 03:06:31.947437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:16.008 [2024-11-29 03:06:31.947445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.008 [2024-11-29 03:06:31.947452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:16.008 [2024-11-29 03:06:31.947460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:16.008 [2024-11-29 03:06:31.947468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:16.008 [2024-11-29 03:06:31.947485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947495] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:16.008 [2024-11-29 03:06:31.947512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:16.008 [2024-11-29 03:06:31.947540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:16.008 [2024-11-29 03:06:31.947564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:16.008 [2024-11-29 03:06:31.947587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:16.008 [2024-11-29 03:06:31.947609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.008 [2024-11-29 03:06:31.947624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:16.008 [2024-11-29 03:06:31.947631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:16.008 [2024-11-29 03:06:31.947639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.008 [2024-11-29 03:06:31.947647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:16.008 [2024-11-29 03:06:31.947655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:16.008 [2024-11-29 03:06:31.947665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:16.008 [2024-11-29 03:06:31.947680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:16.008 [2024-11-29 03:06:31.947689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947696] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:16.008 [2024-11-29 03:06:31.947706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:16.008 [2024-11-29 03:06:31.947715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.008 [2024-11-29 03:06:31.947732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:16.008 [2024-11-29 03:06:31.947739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:16.008 [2024-11-29 03:06:31.947746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:16.008 
[2024-11-29 03:06:31.947756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:16.008 [2024-11-29 03:06:31.947763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:16.008 [2024-11-29 03:06:31.947770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:16.008 [2024-11-29 03:06:31.947779] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:16.008 [2024-11-29 03:06:31.947788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.008 [2024-11-29 03:06:31.947799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:16.008 [2024-11-29 03:06:31.947808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:16.008 [2024-11-29 03:06:31.947815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:16.008 [2024-11-29 03:06:31.947822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:16.008 [2024-11-29 03:06:31.947843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:16.008 [2024-11-29 03:06:31.947851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:16.008 [2024-11-29 03:06:31.947858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:16.008 [2024-11-29 03:06:31.947872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:16.008 [2024-11-29 03:06:31.947879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:16.008 [2024-11-29 03:06:31.947886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:16.008 [2024-11-29 03:06:31.947894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:16.008 [2024-11-29 03:06:31.947902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:16.008 [2024-11-29 03:06:31.947909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:16.009 [2024-11-29 03:06:31.947917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:16.009 [2024-11-29 03:06:31.947925] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:16.009 [2024-11-29 03:06:31.947936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.009 [2024-11-29 03:06:31.947946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:16.009 [2024-11-29 03:06:31.947954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:16.009 [2024-11-29 03:06:31.947961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:16.009 [2024-11-29 03:06:31.947968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:16.009 [2024-11-29 03:06:31.947976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.947983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:16.009 [2024-11-29 03:06:31.947991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:19:16.009 [2024-11-29 03:06:31.947997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.962770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.962820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.009 [2024-11-29 03:06:31.962851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.719 ms 00:19:16.009 [2024-11-29 03:06:31.962860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.963004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.963020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:16.009 [2024-11-29 03:06:31.963028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:16.009 [2024-11-29 03:06:31.963036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.984275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.984459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.009 [2024-11-29 03:06:31.984489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.218 ms 00:19:16.009 [2024-11-29 03:06:31.984499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.984602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.984617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.009 [2024-11-29 03:06:31.984628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.009 [2024-11-29 03:06:31.984637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.985040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.985061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.009 [2024-11-29 03:06:31.985073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:19:16.009 [2024-11-29 03:06:31.985084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.985253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.985268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.009 [2024-11-29 03:06:31.985279] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:16.009 [2024-11-29 03:06:31.985290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.991867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.991900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.009 [2024-11-29 03:06:31.991921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.548 ms 00:19:16.009 [2024-11-29 03:06:31.991930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.009 [2024-11-29 03:06:31.994964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:16.009 [2024-11-29 03:06:31.995084] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:16.009 [2024-11-29 03:06:31.995098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.009 [2024-11-29 03:06:31.995106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:16.009 [2024-11-29 03:06:31.995114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:19:16.009 [2024-11-29 03:06:31.995120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.009738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.009773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:16.270 [2024-11-29 03:06:32.009783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.564 ms 00:19:16.270 [2024-11-29 03:06:32.009790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.011887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.011915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:16.270 [2024-11-29 03:06:32.011924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:19:16.270 [2024-11-29 03:06:32.011930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.013850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.013880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:16.270 [2024-11-29 03:06:32.013889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:19:16.270 [2024-11-29 03:06:32.013895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.014224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.014235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:16.270 [2024-11-29 03:06:32.014244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:19:16.270 [2024-11-29 03:06:32.014250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.030048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.030095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:16.270 [2024-11-29 03:06:32.030106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.777 ms 00:19:16.270 [2024-11-29 03:06:32.030114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.037571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:16.270 [2024-11-29 03:06:32.052524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.052567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:16.270 [2024-11-29 03:06:32.052577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.350 ms 00:19:16.270 [2024-11-29 03:06:32.052585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.052656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.052665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:16.270 [2024-11-29 03:06:32.052677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:16.270 [2024-11-29 03:06:32.052684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.052728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.052736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:16.270 [2024-11-29 03:06:32.052743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:16.270 [2024-11-29 03:06:32.052750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.052773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.052781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:16.270 [2024-11-29 03:06:32.052789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.270 [2024-11-29 03:06:32.052799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.052851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:16.270 [2024-11-29 03:06:32.052862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.052869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:16.270 [2024-11-29 03:06:32.052882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:16.270 [2024-11-29 03:06:32.052889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.056955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.056988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:16.270 [2024-11-29 03:06:32.056998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.044 ms 00:19:16.270 [2024-11-29 03:06:32.057006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 [2024-11-29 03:06:32.057096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.270 [2024-11-29 03:06:32.057106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:16.270 [2024-11-29 03:06:32.057119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:16.270 [2024-11-29 03:06:32.057126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.270 
[2024-11-29 03:06:32.057916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:16.270 [2024-11-29 03:06:32.058997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.234 ms, result 0 00:19:16.270 [2024-11-29 03:06:32.060486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.270 [2024-11-29 03:06:32.069614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.215  [2024-11-29T03:06:34.158Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T03:06:35.544Z] Copying: 24/256 [MB] (10 MBps) [2024-11-29T03:06:36.114Z] Copying: 35/256 [MB] (10 MBps) [2024-11-29T03:06:37.525Z] Copying: 52/256 [MB] (16 MBps) [2024-11-29T03:06:38.471Z] Copying: 62/256 [MB] (10 MBps) [2024-11-29T03:06:39.414Z] Copying: 73/256 [MB] (10 MBps) [2024-11-29T03:06:40.413Z] Copying: 85/256 [MB] (11 MBps) [2024-11-29T03:06:41.357Z] Copying: 96/256 [MB] (11 MBps) [2024-11-29T03:06:42.300Z] Copying: 109/256 [MB] (13 MBps) [2024-11-29T03:06:43.247Z] Copying: 130/256 [MB] (20 MBps) [2024-11-29T03:06:44.192Z] Copying: 148/256 [MB] (18 MBps) [2024-11-29T03:06:45.138Z] Copying: 169/256 [MB] (20 MBps) [2024-11-29T03:06:46.526Z] Copying: 189/256 [MB] (20 MBps) [2024-11-29T03:06:47.466Z] Copying: 205/256 [MB] (16 MBps) [2024-11-29T03:06:48.409Z] Copying: 225/256 [MB] (20 MBps) [2024-11-29T03:06:48.983Z] Copying: 247/256 [MB] (21 MBps) [2024-11-29T03:06:49.244Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-29 03:06:49.048865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.252 [2024-11-29 03:06:49.050932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.050985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:33.252 [2024-11-29 03:06:49.051000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.252 [2024-11-29 03:06:49.051011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.051037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:33.252 [2024-11-29 03:06:49.051721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.051766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:33.252 [2024-11-29 03:06:49.051779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:19:33.252 [2024-11-29 03:06:49.051791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.052115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.052136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:33.252 [2024-11-29 03:06:49.052150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:19:33.252 [2024-11-29 03:06:49.052159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.057414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.057446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:33.252 [2024-11-29 03:06:49.057458] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:19:33.252 [2024-11-29 03:06:49.057468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.064532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.064574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:33.252 [2024-11-29 03:06:49.064585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.037 ms 00:19:33.252 [2024-11-29 03:06:49.064601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.067729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.067796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:33.252 [2024-11-29 03:06:49.067809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:19:33.252 [2024-11-29 03:06:49.067817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.072881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.072934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:33.252 [2024-11-29 03:06:49.072945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:19:33.252 [2024-11-29 03:06:49.072954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.073096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.073107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:33.252 [2024-11-29 03:06:49.073124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:33.252 [2024-11-29 03:06:49.073133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.076061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.076116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:33.252 [2024-11-29 03:06:49.076127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:19:33.252 [2024-11-29 03:06:49.076134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.079002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.079048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:33.252 [2024-11-29 03:06:49.079059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:19:33.252 [2024-11-29 03:06:49.079066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.081360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.081409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:33.252 [2024-11-29 03:06:49.081420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:19:33.252 [2024-11-29 03:06:49.081426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.083707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.083754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean 
state 00:19:33.252 [2024-11-29 03:06:49.083764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:19:33.252 [2024-11-29 03:06:49.083771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.083816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:33.252 [2024-11-29 03:06:49.083848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.083993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084021] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 
03:06:49.084230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:19:33.252 [2024-11-29 03:06:49.084423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:33.252 [2024-11-29 03:06:49.084649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:33.252 [2024-11-29 03:06:49.084659] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72ba7a5-0ce0-4bc6-a145-7b09d187e338 00:19:33.252 [2024-11-29 03:06:49.084667] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:33.252 [2024-11-29 03:06:49.084675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:33.252 [2024-11-29 03:06:49.084682] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:33.252 [2024-11-29 03:06:49.084690] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:33.252 [2024-11-29 03:06:49.084697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:33.252 [2024-11-29 03:06:49.084708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:33.252 [2024-11-29 03:06:49.084716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:33.252 [2024-11-29 03:06:49.084723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:33.252 [2024-11-29 03:06:49.084729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:33.252 [2024-11-29 03:06:49.084737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.084745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:33.252 [2024-11-29 03:06:49.084754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:19:33.252 [2024-11-29 03:06:49.084761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.087160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.087201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:33.252 [2024-11-29 03:06:49.087212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.380 ms 00:19:33.252 [2024-11-29 03:06:49.087226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.087371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.252 [2024-11-29 03:06:49.087380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:33.252 [2024-11-29 03:06:49.087390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:33.252 [2024-11-29 03:06:49.087397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.095290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.252 [2024-11-29 03:06:49.095338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.252 [2024-11-29 03:06:49.095349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.252 [2024-11-29 03:06:49.095364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 
03:06:49.095436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.252 [2024-11-29 03:06:49.095445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.252 [2024-11-29 03:06:49.095454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.252 [2024-11-29 03:06:49.095464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.252 [2024-11-29 03:06:49.095520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.095530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.253 [2024-11-29 03:06:49.095538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.095546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.095569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.095582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.253 [2024-11-29 03:06:49.095590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.095602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.109419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.109472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.253 [2024-11-29 03:06:49.109484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.109501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.119720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.119770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.253 [2024-11-29 03:06:49.119790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.119799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.119873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.119883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.253 [2024-11-29 03:06:49.119893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.119901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.119935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.119947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.253 [2024-11-29 03:06:49.119956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.119964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.120051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.120061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.253 [2024-11-29 03:06:49.120077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.120084] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.120128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.120141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:33.253 [2024-11-29 03:06:49.120149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.120157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.120200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.120222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.253 [2024-11-29 03:06:49.120230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.120238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.120292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.253 [2024-11-29 03:06:49.120306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.253 [2024-11-29 03:06:49.120313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.253 [2024-11-29 03:06:49.120321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.253 [2024-11-29 03:06:49.120475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.512 ms, result 0 00:19:33.513 00:19:33.513 00:19:33.513 03:06:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:34.084 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:34.085 03:06:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87821 00:19:34.085 03:06:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87821 ']' 00:19:34.085 03:06:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87821 00:19:34.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87821) - No such process 00:19:34.085 Process with pid 87821 is not found 00:19:34.085 03:06:49 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87821 is not found' 00:19:34.085 00:19:34.085 real 1m24.057s 00:19:34.085 user 1m38.242s 00:19:34.085 sys 0m14.337s 00:19:34.085 03:06:49 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.085 ************************************ 00:19:34.085 END TEST ftl_trim 00:19:34.085 ************************************ 00:19:34.085 03:06:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:34.085 03:06:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:34.085 03:06:50 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:19:34.085 03:06:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.085 03:06:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:34.085 ************************************ 00:19:34.085 START TEST ftl_restore 00:19:34.085 ************************************ 00:19:34.085 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:34.346 * Looking for test storage... 00:19:34.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.346 03:06:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.346 --rc genhtml_branch_coverage=1 00:19:34.346 --rc genhtml_function_coverage=1 00:19:34.346 --rc genhtml_legend=1 00:19:34.346 --rc geninfo_all_blocks=1 00:19:34.346 --rc geninfo_unexecuted_blocks=1 00:19:34.346 00:19:34.346 ' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.346 --rc genhtml_branch_coverage=1 00:19:34.346 --rc genhtml_function_coverage=1 00:19:34.346 --rc genhtml_legend=1 00:19:34.346 --rc geninfo_all_blocks=1 00:19:34.346 --rc geninfo_unexecuted_blocks=1 00:19:34.346 00:19:34.346 ' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.346 --rc genhtml_branch_coverage=1 00:19:34.346 --rc genhtml_function_coverage=1 00:19:34.346 --rc genhtml_legend=1 00:19:34.346 --rc geninfo_all_blocks=1 00:19:34.346 --rc geninfo_unexecuted_blocks=1 00:19:34.346 00:19:34.346 ' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.346 --rc genhtml_branch_coverage=1 00:19:34.346 --rc genhtml_function_coverage=1 00:19:34.346 --rc genhtml_legend=1 00:19:34.346 --rc geninfo_all_blocks=1 00:19:34.346 --rc geninfo_unexecuted_blocks=1 00:19:34.346 00:19:34.346 ' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
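The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2: 'lt 1.15 2' calls cmp_versions, which splits both version strings on '.', '-' and ':', treats missing fields as 0, and compares field by field; the result selects the '--rc lcov_*_coverage' flags packed into LCOV_OPTS and LCOV. A minimal standalone bash sketch of that comparison, assuming purely numeric version fields (the helper names and the IFS=.-: splitting come from the trace; the exact function body here is illustrative, not the common.sh source):

    # Compare two dotted version strings, e.g. cmp_versions_sketch 1.15 '<' 2.
    # Returns success iff "$1 $2 $3" holds for op in '<', '>', '='.
    cmp_versions_sketch() {
        local IFS=.-:            # split fields on '.', '-' and ':' as in the trace
        local op="$2" v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # a missing field compares as 0, mirroring the zero-fill in the trace
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((10#$d1 > 10#$d2)) && { [[ $op == '>' ]]; return; }
            ((10#$d1 < 10#$d2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "lcov is older than 2: enable --rc lcov_* flags"

As in the run above, 1.15 compares below 2 on the first field, so the pre-2.0 lcov option set is exported.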
00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6f1rtDlCSH 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:34.346 
03:06:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88121 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88121 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88121 ']' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.346 03:06:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:34.346 03:06:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.346 [2024-11-29 03:06:50.291528] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:34.346 [2024-11-29 03:06:50.291682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88121 ] 00:19:34.606 [2024-11-29 03:06:50.437109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.606 [2024-11-29 03:06:50.468298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.177 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.177 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:35.177 03:06:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:35.779 03:06:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:35.779 03:06:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:35.779 03:06:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:35.779 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.779 { 00:19:35.779 "name": "nvme0n1", 00:19:35.779 "aliases": [ 00:19:35.779 "4036939d-6be0-45d0-90f0-c5387b0db199" 00:19:35.779 ], 00:19:35.779 "product_name": "NVMe disk", 00:19:35.779 "block_size": 4096, 00:19:35.779 "num_blocks": 1310720, 00:19:35.779 "uuid": 
"4036939d-6be0-45d0-90f0-c5387b0db199", 00:19:35.779 "numa_id": -1, 00:19:35.779 "assigned_rate_limits": { 00:19:35.779 "rw_ios_per_sec": 0, 00:19:35.779 "rw_mbytes_per_sec": 0, 00:19:35.779 "r_mbytes_per_sec": 0, 00:19:35.779 "w_mbytes_per_sec": 0 00:19:35.779 }, 00:19:35.779 "claimed": true, 00:19:35.779 "claim_type": "read_many_write_one", 00:19:35.779 "zoned": false, 00:19:35.779 "supported_io_types": { 00:19:35.779 "read": true, 00:19:35.779 "write": true, 00:19:35.779 "unmap": true, 00:19:35.779 "flush": true, 00:19:35.779 "reset": true, 00:19:35.779 "nvme_admin": true, 00:19:35.779 "nvme_io": true, 00:19:35.779 "nvme_io_md": false, 00:19:35.779 "write_zeroes": true, 00:19:35.779 "zcopy": false, 00:19:35.779 "get_zone_info": false, 00:19:35.779 "zone_management": false, 00:19:35.779 "zone_append": false, 00:19:35.779 "compare": true, 00:19:35.779 "compare_and_write": false, 00:19:35.779 "abort": true, 00:19:35.779 "seek_hole": false, 00:19:35.779 "seek_data": false, 00:19:35.779 "copy": true, 00:19:35.779 "nvme_iov_md": false 00:19:35.780 }, 00:19:35.780 "driver_specific": { 00:19:35.780 "nvme": [ 00:19:35.780 { 00:19:35.780 "pci_address": "0000:00:11.0", 00:19:35.780 "trid": { 00:19:35.780 "trtype": "PCIe", 00:19:35.780 "traddr": "0000:00:11.0" 00:19:35.780 }, 00:19:35.780 "ctrlr_data": { 00:19:35.780 "cntlid": 0, 00:19:35.780 "vendor_id": "0x1b36", 00:19:35.780 "model_number": "QEMU NVMe Ctrl", 00:19:35.780 "serial_number": "12341", 00:19:35.780 "firmware_revision": "8.0.0", 00:19:35.780 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:35.780 "oacs": { 00:19:35.780 "security": 0, 00:19:35.780 "format": 1, 00:19:35.780 "firmware": 0, 00:19:35.780 "ns_manage": 1 00:19:35.780 }, 00:19:35.780 "multi_ctrlr": false, 00:19:35.780 "ana_reporting": false 00:19:35.780 }, 00:19:35.780 "vs": { 00:19:35.780 "nvme_version": "1.4" 00:19:35.780 }, 00:19:35.780 "ns_data": { 00:19:35.780 "id": 1, 00:19:35.780 "can_share": false 00:19:35.780 } 00:19:35.780 } 00:19:35.780 ], 00:19:35.780 "mp_policy": "active_passive" 00:19:35.780 } 00:19:35.780 } 00:19:35.780 ]' 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:35.780 03:06:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:35.780 03:06:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:35.780 03:06:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:35.780 03:06:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:35.780 03:06:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:35.780 03:06:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:36.041 03:06:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=2119d3de-3c85-4291-867e-b9fae5b7ef89 00:19:36.041 03:06:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:36.041 03:06:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2119d3de-3c85-4291-867e-b9fae5b7ef89 00:19:36.303 03:06:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:36.564 03:06:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=af25ed71-5bd7-40ba-ba69-5338cfdb4042 00:19:36.564 03:06:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af25ed71-5bd7-40ba-ba69-5338cfdb4042 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:36.825 03:06:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:36.825 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:36.825 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:36.825 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:36.825 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:36.825 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.086 { 00:19:37.086 "name": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:37.086 "aliases": [ 00:19:37.086 "lvs/nvme0n1p0" 00:19:37.086 ], 00:19:37.086 "product_name": "Logical Volume", 00:19:37.086 "block_size": 4096, 00:19:37.086 "num_blocks": 26476544, 00:19:37.086 "uuid": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:37.086 "assigned_rate_limits": { 00:19:37.086 "rw_ios_per_sec": 0, 00:19:37.086 "rw_mbytes_per_sec": 0, 00:19:37.086 "r_mbytes_per_sec": 0, 00:19:37.086 "w_mbytes_per_sec": 0 00:19:37.086 }, 00:19:37.086 "claimed": false, 00:19:37.086 "zoned": false, 00:19:37.086 "supported_io_types": { 00:19:37.086 "read": true, 00:19:37.086 "write": true, 00:19:37.086 "unmap": true, 00:19:37.086 "flush": false, 00:19:37.086 "reset": true, 00:19:37.086 "nvme_admin": false, 00:19:37.086 "nvme_io": false, 00:19:37.086 "nvme_io_md": false, 00:19:37.086 "write_zeroes": true, 00:19:37.086 "zcopy": false, 00:19:37.086 "get_zone_info": false, 00:19:37.086 "zone_management": false, 00:19:37.086 "zone_append": false, 00:19:37.086 "compare": false, 00:19:37.086 "compare_and_write": false, 00:19:37.086 "abort": false, 00:19:37.086 "seek_hole": true, 00:19:37.086 "seek_data": true, 00:19:37.086 "copy": false, 00:19:37.086 "nvme_iov_md": false 00:19:37.086 }, 00:19:37.086 "driver_specific": { 00:19:37.086 "lvol": { 00:19:37.086 "lvol_store_uuid": "af25ed71-5bd7-40ba-ba69-5338cfdb4042", 00:19:37.086 "base_bdev": "nvme0n1", 00:19:37.086 "thin_provision": true, 00:19:37.086 "num_allocated_clusters": 0, 00:19:37.086 "snapshot": false, 00:19:37.086 "clone": false, 00:19:37.086 "esnap_clone": false 00:19:37.086 } 00:19:37.086 } 00:19:37.086 } 00:19:37.086 ]' 00:19:37.086 03:06:52 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.086 03:06:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.086 03:06:52 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:37.087 03:06:52 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:37.087 03:06:52 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:37.347 03:06:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:37.347 03:06:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:37.347 03:06:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.347 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.347 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.347 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:37.347 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:37.347 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.609 { 00:19:37.609 "name": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:37.609 "aliases": [ 00:19:37.609 "lvs/nvme0n1p0" 00:19:37.609 ], 00:19:37.609 "product_name": "Logical Volume", 00:19:37.609 "block_size": 4096, 00:19:37.609 "num_blocks": 26476544, 00:19:37.609 "uuid": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:37.609 "assigned_rate_limits": { 00:19:37.609 "rw_ios_per_sec": 0, 00:19:37.609 "rw_mbytes_per_sec": 0, 00:19:37.609 "r_mbytes_per_sec": 0, 00:19:37.609 "w_mbytes_per_sec": 0 00:19:37.609 }, 00:19:37.609 "claimed": false, 00:19:37.609 "zoned": false, 00:19:37.609 "supported_io_types": { 00:19:37.609 "read": true, 00:19:37.609 "write": true, 00:19:37.609 "unmap": true, 00:19:37.609 "flush": false, 00:19:37.609 "reset": true, 00:19:37.609 "nvme_admin": false, 00:19:37.609 "nvme_io": false, 00:19:37.609 "nvme_io_md": false, 00:19:37.609 "write_zeroes": true, 00:19:37.609 "zcopy": false, 00:19:37.609 "get_zone_info": false, 00:19:37.609 "zone_management": false, 00:19:37.609 "zone_append": false, 00:19:37.609 "compare": false, 00:19:37.609 "compare_and_write": false, 00:19:37.609 "abort": false, 00:19:37.609 "seek_hole": true, 00:19:37.609 "seek_data": true, 00:19:37.609 "copy": false, 00:19:37.609 "nvme_iov_md": false 00:19:37.609 }, 00:19:37.609 "driver_specific": { 00:19:37.609 "lvol": { 00:19:37.609 "lvol_store_uuid": "af25ed71-5bd7-40ba-ba69-5338cfdb4042", 00:19:37.609 "base_bdev": "nvme0n1", 00:19:37.609 "thin_provision": true, 00:19:37.609 "num_allocated_clusters": 0, 00:19:37.609 "snapshot": false, 00:19:37.609 "clone": false, 00:19:37.609 "esnap_clone": false 00:19:37.609 } 00:19:37.609 } 00:19:37.609 } 00:19:37.609 ]' 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
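The repeated jq '.[] .block_size' and jq '.[] .num_blocks' probes in this trace are autotest_common.sh's get_bdev_size helper sizing a bdev in MiB from bdev_get_bdevs output: 4096-byte blocks times 26476544 blocks works out to the 103424 MiB reported for the thin-provisioned lvol, just as 1310720 blocks gave 5120 MiB for nvme0n1 earlier. A minimal sketch of that flow, assuming rpc_py points at scripts/rpc.py (the function framing and error handling are illustrative; the RPC name, jq filters, and arithmetic are the ones visible in the trace):

    # Assumption: rpc_py resolves to the SPDK rpc.py client used throughout this log.
    rpc_py=${rpc_py:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}

    # Report a bdev's capacity in MiB, mirroring the traced get_bdev_size steps.
    get_bdev_size_sketch() {
        local bdev_name=$1 bdev_info bs nb
        # query the bdev's JSON description over the RPC socket
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name") || return 1
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        # block_size * num_blocks, converted from bytes to MiB
        echo $((bs * nb / 1024 / 1024))     # e.g. 4096 * 26476544 -> 103424
    }

    base_size=$(get_bdev_size_sketch nvme0n1)   # 5120 MiB for the 1310720-block nvme0n1

The MiB result is what the surrounding checks consume, e.g. the '[[ 103424 -le 5120 ]]' guard comparing the requested FTL base size against the available capacity.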
00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.609 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.609 03:06:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:37.609 03:06:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:37.871 03:06:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:37.871 03:06:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.871 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:37.871 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.871 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:37.871 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:37.871 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e 00:19:38.133 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:38.133 { 00:19:38.133 "name": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:38.133 "aliases": [ 00:19:38.133 "lvs/nvme0n1p0" 00:19:38.133 ], 00:19:38.133 "product_name": "Logical Volume", 00:19:38.133 "block_size": 4096, 00:19:38.133 "num_blocks": 26476544, 00:19:38.133 "uuid": "8b1fa1f1-e1a6-4742-a721-8ad1ada5757e", 00:19:38.133 "assigned_rate_limits": { 00:19:38.133 "rw_ios_per_sec": 0, 00:19:38.133 "rw_mbytes_per_sec": 0, 00:19:38.133 "r_mbytes_per_sec": 0, 00:19:38.133 "w_mbytes_per_sec": 0 00:19:38.133 }, 00:19:38.133 "claimed": false, 00:19:38.133 "zoned": false, 00:19:38.133 "supported_io_types": { 00:19:38.133 "read": true, 00:19:38.133 "write": true, 00:19:38.133 "unmap": true, 00:19:38.133 "flush": false, 00:19:38.133 "reset": true, 00:19:38.133 "nvme_admin": false, 00:19:38.133 "nvme_io": false, 00:19:38.133 "nvme_io_md": false, 00:19:38.133 "write_zeroes": true, 00:19:38.133 "zcopy": false, 00:19:38.133 "get_zone_info": false, 00:19:38.133 "zone_management": false, 00:19:38.133 "zone_append": false, 00:19:38.133 "compare": false, 00:19:38.133 "compare_and_write": false, 00:19:38.133 "abort": false, 00:19:38.133 "seek_hole": true, 00:19:38.133 "seek_data": true, 00:19:38.133 "copy": false, 00:19:38.133 "nvme_iov_md": false 00:19:38.133 }, 00:19:38.133 "driver_specific": { 00:19:38.133 "lvol": { 00:19:38.133 "lvol_store_uuid": "af25ed71-5bd7-40ba-ba69-5338cfdb4042", 00:19:38.133 "base_bdev": "nvme0n1", 00:19:38.133 "thin_provision": true, 00:19:38.133 "num_allocated_clusters": 0, 00:19:38.133 "snapshot": false, 00:19:38.134 "clone": false, 00:19:38.134 "esnap_clone": false 00:19:38.134 } 00:19:38.134 } 00:19:38.134 } 00:19:38.134 ]' 00:19:38.134 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:38.134 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:38.134 03:06:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:38.134 03:06:54 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:19:38.134 03:06:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:38.134 03:06:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e --l2p_dram_limit 10' 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:38.134 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:38.134 03:06:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8b1fa1f1-e1a6-4742-a721-8ad1ada5757e --l2p_dram_limit 10 -c nvc0n1p0 00:19:38.396 [2024-11-29 03:06:54.217819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.396 [2024-11-29 03:06:54.217908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.396 [2024-11-29 03:06:54.217929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.396 [2024-11-29 03:06:54.217940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.396 [2024-11-29 03:06:54.218013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.396 [2024-11-29 03:06:54.218031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.396 [2024-11-29 03:06:54.218040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:38.396 [2024-11-29 03:06:54.218056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.396 [2024-11-29 03:06:54.218077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.397 [2024-11-29 03:06:54.218421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.397 [2024-11-29 03:06:54.218448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.218459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.397 [2024-11-29 03:06:54.218470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:19:38.397 [2024-11-29 03:06:54.218485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.218563] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4399c0a4-156c-4cb4-889f-ab275081974c 00:19:38.397 [2024-11-29 03:06:54.220309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.220367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:38.397 [2024-11-29 03:06:54.220381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:38.397 [2024-11-29 03:06:54.220389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.229133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 
03:06:54.229178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.397 [2024-11-29 03:06:54.229193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.691 ms 00:19:38.397 [2024-11-29 03:06:54.229201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.229298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.229306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.397 [2024-11-29 03:06:54.229319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:38.397 [2024-11-29 03:06:54.229327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.229400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.229411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.397 [2024-11-29 03:06:54.229422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:38.397 [2024-11-29 03:06:54.229429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.229456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.397 [2024-11-29 03:06:54.231728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.231776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.397 [2024-11-29 03:06:54.231787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:19:38.397 [2024-11-29 03:06:54.231797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.231854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.231867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.397 [2024-11-29 03:06:54.231876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:38.397 [2024-11-29 03:06:54.231888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.231907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:38.397 [2024-11-29 03:06:54.232060] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.397 [2024-11-29 03:06:54.232073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.397 [2024-11-29 03:06:54.232087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.397 [2024-11-29 03:06:54.232098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:38.397 [2024-11-29 03:06:54.232135] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.397 [2024-11-29 03:06:54.232143] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.397 [2024-11-29 03:06:54.232153] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.397 [2024-11-29 03:06:54.232161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.232171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.397 [2024-11-29 03:06:54.232180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:38.397 [2024-11-29 03:06:54.232193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.232277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.397 [2024-11-29 03:06:54.232297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.397 [2024-11-29 03:06:54.232306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:38.397 [2024-11-29 03:06:54.232318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.397 [2024-11-29 03:06:54.232414] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.397 [2024-11-29 03:06:54.232428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.397 [2024-11-29 03:06:54.232437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.397 [2024-11-29 03:06:54.232468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.397 [2024-11-29 03:06:54.232494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.397 [2024-11-29 03:06:54.232512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.397 [2024-11-29 03:06:54.232521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:38.397 [2024-11-29 03:06:54.232529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.397 [2024-11-29 03:06:54.232545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.397 [2024-11-29 03:06:54.232554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:38.397 [2024-11-29 03:06:54.232566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.397 [2024-11-29 03:06:54.232585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.397 [2024-11-29 03:06:54.232612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.397 
[2024-11-29 03:06:54.232639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.397 [2024-11-29 03:06:54.232665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.397 [2024-11-29 03:06:54.232694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.397 [2024-11-29 03:06:54.232718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.397 [2024-11-29 03:06:54.232736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.397 [2024-11-29 03:06:54.232745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:38.397 [2024-11-29 03:06:54.232752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.397 [2024-11-29 03:06:54.232762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.397 [2024-11-29 03:06:54.232770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:38.397 [2024-11-29 03:06:54.232781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.397 [2024-11-29 03:06:54.232799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:38.397 [2024-11-29 03:06:54.232806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.397 [2024-11-29 03:06:54.232847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.397 [2024-11-29 03:06:54.232859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.397 [2024-11-29 03:06:54.232883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.397 [2024-11-29 03:06:54.232891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.397 [2024-11-29 03:06:54.232900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.397 [2024-11-29 03:06:54.232907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.397 [2024-11-29 03:06:54.232916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.397 [2024-11-29 03:06:54.232923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.397 [2024-11-29 03:06:54.232936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.398 [2024-11-29 
03:06:54.232946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.232957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:38.398 [2024-11-29 03:06:54.232965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:38.398 [2024-11-29 03:06:54.232975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:38.398 [2024-11-29 03:06:54.232982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:38.398 [2024-11-29 03:06:54.232992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:38.398 [2024-11-29 03:06:54.233000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:38.398 [2024-11-29 03:06:54.233013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:38.398 [2024-11-29 03:06:54.233021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:38.398 [2024-11-29 03:06:54.233030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:38.398 [2024-11-29 03:06:54.233039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:38.398 [2024-11-29 03:06:54.233082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.398 [2024-11-29 03:06:54.233090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.398 [2024-11-29 03:06:54.233108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.398 [2024-11-29 03:06:54.233118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.398 [2024-11-29 03:06:54.233125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.398 [2024-11-29 03:06:54.233134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.398 [2024-11-29 03:06:54.233142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.398 [2024-11-29 03:06:54.233154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:19:38.398 [2024-11-29 03:06:54.233162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.398 [2024-11-29 03:06:54.233205] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:38.398 [2024-11-29 03:06:54.233216] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:42.606 [2024-11-29 03:06:58.162352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.162443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:42.606 [2024-11-29 03:06:58.162471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3929.121 ms 00:19:42.606 [2024-11-29 03:06:58.162486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.176342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.176409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.606 [2024-11-29 03:06:58.176434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.695 ms 00:19:42.606 [2024-11-29 03:06:58.176447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.176638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.176656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:42.606 [2024-11-29 03:06:58.176674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:42.606 [2024-11-29 03:06:58.176687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.189406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.189461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.606 [2024-11-29 03:06:58.189480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:19:42.606 [2024-11-29 03:06:58.189503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.189551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.189578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.606 [2024-11-29 03:06:58.189595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:42.606 [2024-11-29 03:06:58.189606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.190264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.190318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.606 [2024-11-29 03:06:58.190337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:19:42.606 [2024-11-29 03:06:58.190351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 
[2024-11-29 03:06:58.190529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.190543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.606 [2024-11-29 03:06:58.190558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:42.606 [2024-11-29 03:06:58.190570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.199555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.199608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.606 [2024-11-29 03:06:58.199632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.950 ms 00:19:42.606 [2024-11-29 03:06:58.199643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.219599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:42.606 [2024-11-29 03:06:58.223852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.223913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:42.606 [2024-11-29 03:06:58.223930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.092 ms 00:19:42.606 [2024-11-29 03:06:58.223945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.307387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.307468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:42.606 [2024-11-29 03:06:58.307488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.389 ms 00:19:42.606 [2024-11-29 03:06:58.307507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.307787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.307820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:42.606 [2024-11-29 03:06:58.307858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:42.606 [2024-11-29 03:06:58.307876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.313755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.313818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:42.606 [2024-11-29 03:06:58.313853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:19:42.606 [2024-11-29 03:06:58.313868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.318871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.318928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:42.606 [2024-11-29 03:06:58.318943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.944 ms 00:19:42.606 [2024-11-29 03:06:58.318957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.606 [2024-11-29 03:06:58.319340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.606 [2024-11-29 03:06:58.319374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:42.607 
[2024-11-29 03:06:58.319390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:42.607 [2024-11-29 03:06:58.319407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.360026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.360082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:42.607 [2024-11-29 03:06:58.360102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.582 ms 00:19:42.607 [2024-11-29 03:06:58.360117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.366868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.366932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:42.607 [2024-11-29 03:06:58.366948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:19:42.607 [2024-11-29 03:06:58.366963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.372682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.372740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:42.607 [2024-11-29 03:06:58.372754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.661 ms 00:19:42.607 [2024-11-29 03:06:58.372768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.379147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.379207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:42.607 [2024-11-29 03:06:58.379221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.323 ms 00:19:42.607 [2024-11-29 03:06:58.379238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.379302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.379320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:42.607 [2024-11-29 03:06:58.379335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:42.607 [2024-11-29 03:06:58.379351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.379455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.607 [2024-11-29 03:06:58.379480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:42.607 [2024-11-29 03:06:58.379495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:42.607 [2024-11-29 03:06:58.379515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.607 [2024-11-29 03:06:58.380792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4162.486 ms, result 0 00:19:42.607 { 00:19:42.607 "name": "ftl0", 00:19:42.607 "uuid": "4399c0a4-156c-4cb4-889f-ab275081974c" 00:19:42.607 } 00:19:42.607 03:06:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:42.607 03:06:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:42.870 03:06:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:42.870 03:06:58 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:42.870 [2024-11-29 03:06:58.802458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.870 [2024-11-29 03:06:58.802519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:42.871 [2024-11-29 03:06:58.802544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:42.871 [2024-11-29 03:06:58.802556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.802594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.871 [2024-11-29 03:06:58.803396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.803462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:42.871 [2024-11-29 03:06:58.803479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:19:42.871 [2024-11-29 03:06:58.803497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.803862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.803890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:42.871 [2024-11-29 03:06:58.803909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:42.871 [2024-11-29 03:06:58.803924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.807223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.807256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:42.871 [2024-11-29 03:06:58.807270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:19:42.871 [2024-11-29 03:06:58.807284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.813568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.813621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:42.871 [2024-11-29 03:06:58.813637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:19:42.871 [2024-11-29 03:06:58.813657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.816686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.816755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:42.871 [2024-11-29 03:06:58.816770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:19:42.871 [2024-11-29 03:06:58.816783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.823194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.823255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:42.871 [2024-11-29 03:06:58.823271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:19:42.871 [2024-11-29 03:06:58.823285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.823462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.823497] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:42.871 [2024-11-29 03:06:58.823513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:42.871 [2024-11-29 03:06:58.823529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.826971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.827026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:42.871 [2024-11-29 03:06:58.827039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:19:42.871 [2024-11-29 03:06:58.827053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.829595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.829654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:42.871 [2024-11-29 03:06:58.829669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:19:42.871 [2024-11-29 03:06:58.829682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.832029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.832090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:42.871 [2024-11-29 03:06:58.832104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:19:42.871 [2024-11-29 03:06:58.832116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.833995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.871 [2024-11-29 03:06:58.834056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:42.871 [2024-11-29 03:06:58.834071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:19:42.871 [2024-11-29 03:06:58.834084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.871 [2024-11-29 03:06:58.834138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:42.871 [2024-11-29 03:06:58.834164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834302] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 
[2024-11-29 03:06:58.834670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:42.871 [2024-11-29 03:06:58.834772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.834984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:42.872 [2024-11-29 03:06:58.835075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:42.872 [2024-11-29 03:06:58.835724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:42.872 [2024-11-29 03:06:58.835762] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4399c0a4-156c-4cb4-889f-ab275081974c 00:19:42.872 [2024-11-29 03:06:58.835784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:42.872 [2024-11-29 03:06:58.835796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:42.872 [2024-11-29 03:06:58.835815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:42.872 [2024-11-29 03:06:58.835843] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:42.872 [2024-11-29 03:06:58.835861] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:42.872 [2024-11-29 03:06:58.835875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:42.872 [2024-11-29 03:06:58.835893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:42.872 [2024-11-29 03:06:58.835904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:42.872 [2024-11-29 03:06:58.835918] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:42.872 [2024-11-29 03:06:58.835931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.872 [2024-11-29 03:06:58.835948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:42.872 [2024-11-29 03:06:58.835962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:19:42.872 [2024-11-29 03:06:58.835977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.838526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.872 [2024-11-29 03:06:58.838585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:42.872 [2024-11-29 03:06:58.838607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:19:42.872 [2024-11-29 03:06:58.838624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.838768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.872 [2024-11-29 03:06:58.838789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:42.872 [2024-11-29 03:06:58.838808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:42.872 [2024-11-29 03:06:58.838846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.847035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.872 [2024-11-29 03:06:58.847095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.872 [2024-11-29 03:06:58.847114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.872 [2024-11-29 03:06:58.847129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.847218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.872 [2024-11-29 03:06:58.847236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.872 [2024-11-29 03:06:58.847250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.872 [2024-11-29 03:06:58.847267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.847350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.872 [2024-11-29 03:06:58.847372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.872 [2024-11-29 03:06:58.847385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.872 [2024-11-29 03:06:58.847405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.872 [2024-11-29 03:06:58.847438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.873 [2024-11-29 03:06:58.847455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.873 [2024-11-29 03:06:58.847469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.873 [2024-11-29 03:06:58.847485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.861857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.861919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.135 [2024-11-29 03:06:58.861938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 
[2024-11-29 03:06:58.861951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.135 [2024-11-29 03:06:58.873099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.135 [2024-11-29 03:06:58.873264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.135 [2024-11-29 03:06:58.873378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.135 [2024-11-29 03:06:58.873532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.135 [2024-11-29 03:06:58.873664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.135 [2024-11-29 03:06:58.873799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.873911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.135 [2024-11-29 03:06:58.873939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.135 [2024-11-29 03:06:58.873956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.135 [2024-11-29 03:06:58.873973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.135 [2024-11-29 03:06:58.874190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.659 ms, result 0 00:19:43.135 true 00:19:43.135 03:06:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88121 00:19:43.135 
03:06:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88121 ']' 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88121 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88121 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.135 killing process with pid 88121 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88121' 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88121 00:19:43.135 03:06:58 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88121 00:19:48.431 03:07:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:52.637 262144+0 records in 00:19:52.637 262144+0 records out 00:19:52.637 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.2713 s, 251 MB/s 00:19:52.637 03:07:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:54.020 03:07:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.020 [2024-11-29 03:07:10.006206] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:54.020 [2024-11-29 03:07:10.006321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88340 ] 00:19:54.307 [2024-11-29 03:07:10.151918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.307 [2024-11-29 03:07:10.173515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.307 [2024-11-29 03:07:10.288847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.307 [2024-11-29 03:07:10.288937] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.570 [2024-11-29 03:07:10.449351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.570 [2024-11-29 03:07:10.449413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.570 [2024-11-29 03:07:10.449428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.571 [2024-11-29 03:07:10.449437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.449494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.449505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.571 [2024-11-29 03:07:10.449514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:54.571 [2024-11-29 03:07:10.449528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.449557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:19:54.571 [2024-11-29 03:07:10.449911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.571 [2024-11-29 03:07:10.449932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.449941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.571 [2024-11-29 03:07:10.449957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:19:54.571 [2024-11-29 03:07:10.449965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.451605] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.571 [2024-11-29 03:07:10.455277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.455329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.571 [2024-11-29 03:07:10.455341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:19:54.571 [2024-11-29 03:07:10.455356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.455426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.455439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:54.571 [2024-11-29 03:07:10.455447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:54.571 [2024-11-29 03:07:10.455461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.463407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.463453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.571 [2024-11-29 03:07:10.463469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.901 ms 00:19:54.571 [2024-11-29 03:07:10.463478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.463576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.463587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.571 [2024-11-29 03:07:10.463596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:54.571 [2024-11-29 03:07:10.463607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.463665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.463676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.571 [2024-11-29 03:07:10.463684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:54.571 [2024-11-29 03:07:10.463695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.463717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.571 [2024-11-29 03:07:10.465879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.465919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.571 [2024-11-29 03:07:10.465930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:19:54.571 [2024-11-29 03:07:10.465937] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.465972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.465981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.571 [2024-11-29 03:07:10.465993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.571 [2024-11-29 03:07:10.466004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.466029] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.571 [2024-11-29 03:07:10.466051] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:54.571 [2024-11-29 03:07:10.466092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.571 [2024-11-29 03:07:10.466108] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:54.571 [2024-11-29 03:07:10.466214] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.571 [2024-11-29 03:07:10.466225] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.571 [2024-11-29 03:07:10.466240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:54.571 [2024-11-29 03:07:10.466251] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466260] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:54.571 [2024-11-29 03:07:10.466276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.571 [2024-11-29 03:07:10.466284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.571 [2024-11-29 03:07:10.466291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.571 [2024-11-29 03:07:10.466299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.466306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.571 [2024-11-29 03:07:10.466315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:54.571 [2024-11-29 03:07:10.466325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.466412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.571 [2024-11-29 03:07:10.466428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.571 [2024-11-29 03:07:10.466435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:54.571 [2024-11-29 03:07:10.466443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.571 [2024-11-29 03:07:10.466545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.571 [2024-11-29 03:07:10.466565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.571 [2024-11-29 03:07:10.466579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.571 
[2024-11-29 03:07:10.466588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.571 [2024-11-29 03:07:10.466607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.571 [2024-11-29 03:07:10.466633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.571 [2024-11-29 03:07:10.466650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.571 [2024-11-29 03:07:10.466657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:54.571 [2024-11-29 03:07:10.466665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.571 [2024-11-29 03:07:10.466673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.571 [2024-11-29 03:07:10.466684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:54.571 [2024-11-29 03:07:10.466692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.571 [2024-11-29 03:07:10.466708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.571 [2024-11-29 03:07:10.466732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.571 [2024-11-29 03:07:10.466755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.571 [2024-11-29 03:07:10.466786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.571 [2024-11-29 03:07:10.466809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.571 [2024-11-29 03:07:10.466848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.571 [2024-11-29 03:07:10.466864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:19:54.571 [2024-11-29 03:07:10.466872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:54.571 [2024-11-29 03:07:10.466879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.571 [2024-11-29 03:07:10.466887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.571 [2024-11-29 03:07:10.466895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:54.571 [2024-11-29 03:07:10.466902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.571 [2024-11-29 03:07:10.466918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:54.571 [2024-11-29 03:07:10.466929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.571 [2024-11-29 03:07:10.466947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.571 [2024-11-29 03:07:10.466956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.571 [2024-11-29 03:07:10.466968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.571 [2024-11-29 03:07:10.466978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.571 [2024-11-29 03:07:10.466987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.571 [2024-11-29 03:07:10.466995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.571 [2024-11-29 03:07:10.467002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.571 [2024-11-29 03:07:10.467009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.571 [2024-11-29 03:07:10.467016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.571 [2024-11-29 03:07:10.467024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.571 [2024-11-29 03:07:10.467034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.571 [2024-11-29 03:07:10.467043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:54.571 [2024-11-29 03:07:10.467051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:54.571 [2024-11-29 03:07:10.467059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:54.571 [2024-11-29 03:07:10.467068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:54.571 [2024-11-29 03:07:10.467075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:54.571 [2024-11-29 03:07:10.467082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:54.571 [2024-11-29 03:07:10.467089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:54.571 [2024-11-29 03:07:10.467096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:54.571 [2024-11-29 03:07:10.467102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:54.571 [2024-11-29 03:07:10.467115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:54.571 [2024-11-29 03:07:10.467122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:54.572 [2024-11-29 03:07:10.467130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:54.572 [2024-11-29 03:07:10.467137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:54.572 [2024-11-29 03:07:10.467144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:54.572 [2024-11-29 03:07:10.467150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.572 [2024-11-29 03:07:10.467158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.572 [2024-11-29 03:07:10.467166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.572 [2024-11-29 03:07:10.467174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.572 [2024-11-29 03:07:10.467181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.572 [2024-11-29 03:07:10.467192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.572 [2024-11-29 03:07:10.467201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.467211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.572 [2024-11-29 03:07:10.467219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:19:54.572 [2024-11-29 03:07:10.467230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.480900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.481100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.572 [2024-11-29 03:07:10.481120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:19:54.572 [2024-11-29 03:07:10.481129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.481224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.481235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.572 [2024-11-29 03:07:10.481244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 
00:19:54.572 [2024-11-29 03:07:10.481251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.502522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.502585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.572 [2024-11-29 03:07:10.502609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.211 ms 00:19:54.572 [2024-11-29 03:07:10.502618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.502670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.502687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.572 [2024-11-29 03:07:10.502698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:54.572 [2024-11-29 03:07:10.502714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.503358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.503403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.572 [2024-11-29 03:07:10.503417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:19:54.572 [2024-11-29 03:07:10.503429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.503612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.503625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.572 [2024-11-29 03:07:10.503637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:54.572 [2024-11-29 03:07:10.503646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.511436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.511485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.572 [2024-11-29 03:07:10.511496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.767 ms 00:19:54.572 [2024-11-29 03:07:10.511503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.515381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:54.572 [2024-11-29 03:07:10.515434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.572 [2024-11-29 03:07:10.515447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.515455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.572 [2024-11-29 03:07:10.515464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:19:54.572 [2024-11-29 03:07:10.515471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.530973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.531038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.572 [2024-11-29 03:07:10.531049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.445 ms 00:19:54.572 [2024-11-29 03:07:10.531058] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.533934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.534104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.572 [2024-11-29 03:07:10.534121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:19:54.572 [2024-11-29 03:07:10.534129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.536777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.536849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.572 [2024-11-29 03:07:10.536860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:19:54.572 [2024-11-29 03:07:10.536867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.572 [2024-11-29 03:07:10.537209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.572 [2024-11-29 03:07:10.537223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.572 [2024-11-29 03:07:10.537232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:54.572 [2024-11-29 03:07:10.537241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.559656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.559725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.833 [2024-11-29 03:07:10.559738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.398 ms 00:19:54.833 [2024-11-29 03:07:10.559748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.567838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:54.833 [2024-11-29 03:07:10.575395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.575530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.833 [2024-11-29 03:07:10.575568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:19:54.833 [2024-11-29 03:07:10.575601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.575935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.575983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.833 [2024-11-29 03:07:10.576011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:54.833 [2024-11-29 03:07:10.576049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.576237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.576266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.833 [2024-11-29 03:07:10.576290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:54.833 [2024-11-29 03:07:10.576319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.576381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.576404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:19:54.833 [2024-11-29 03:07:10.576425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:54.833 [2024-11-29 03:07:10.576447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.576536] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.833 [2024-11-29 03:07:10.576579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.576602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.833 [2024-11-29 03:07:10.576624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:54.833 [2024-11-29 03:07:10.576651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.585090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.585150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.833 [2024-11-29 03:07:10.585165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.386 ms 00:19:54.833 [2024-11-29 03:07:10.585174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.585290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.833 [2024-11-29 03:07:10.585302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.833 [2024-11-29 03:07:10.585316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:54.833 [2024-11-29 03:07:10.585325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.833 [2024-11-29 03:07:10.586881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.911 ms, result 0 00:19:55.778  [2024-11-29T03:07:12.712Z] Copying: 17/1024 [MB] (17 MBps) [... flattened progress-meter output elided; intermediate updates ranged 10-53 MBps ...] [2024-11-29T03:08:16.567Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 03:08:16.307325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.307361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.575 [2024-11-29 03:08:16.307373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:00.575 [2024-11-29 03:08:16.307385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.307402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.575 [2024-11-29 03:08:16.307816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.307842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.575 [2024-11-29 03:08:16.307851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:21:00.575 [2024-11-29 03:08:16.307862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.309101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29
03:08:16.309131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.575 [2024-11-29 03:08:16.309139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:21:00.575 [2024-11-29 03:08:16.309145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.320276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.320390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.575 [2024-11-29 03:08:16.320404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.115 ms 00:21:00.575 [2024-11-29 03:08:16.320410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.325304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.325328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.575 [2024-11-29 03:08:16.325337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:21:00.575 [2024-11-29 03:08:16.325344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.326307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.326334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.575 [2024-11-29 03:08:16.326342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:21:00.575 [2024-11-29 03:08:16.326348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.329319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.329346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.575 [2024-11-29 03:08:16.329353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:21:00.575 [2024-11-29 03:08:16.329359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.329442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.329450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.575 [2024-11-29 03:08:16.329456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:00.575 [2024-11-29 03:08:16.329462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.331107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.331208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.575 [2024-11-29 03:08:16.331219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:21:00.575 [2024-11-29 03:08:16.331225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.332485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.332511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.575 [2024-11-29 03:08:16.332518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:21:00.575 [2024-11-29 03:08:16.332523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.333340] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.333366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.575 [2024-11-29 03:08:16.333373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:21:00.575 [2024-11-29 03:08:16.333378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.334311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.575 [2024-11-29 03:08:16.334403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.575 [2024-11-29 03:08:16.334414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:21:00.575 [2024-11-29 03:08:16.334419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.575 [2024-11-29 03:08:16.334441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.575 [2024-11-29 03:08:16.334451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 99 elided; every entry identical: 0 / 261120 wr_cnt: 0 state: free ...] 00:21:00.577 [2024-11-29 03:08:16.335050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.577 [2024-11-29 03:08:16.335062] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.577 [2024-11-29 03:08:16.335068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4399c0a4-156c-4cb4-889f-ab275081974c 00:21:00.577 [2024-11-29 03:08:16.335074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.577 [2024-11-29 03:08:16.335080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.577 [2024-11-29 03:08:16.335086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.577 [2024-11-29 03:08:16.335096] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.577 [2024-11-29 03:08:16.335103] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.577 [2024-11-29 03:08:16.335109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.577 [2024-11-29 03:08:16.335115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.577 [2024-11-29 03:08:16.335121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.577 [2024-11-29 03:08:16.335126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.577 [2024-11-29 03:08:16.335132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.577 [2024-11-29 03:08:16.335139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.577 [2024-11-29 03:08:16.335148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:21:00.577 [2024-11-29 03:08:16.335154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.336377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.577 [2024-11-29 03:08:16.336391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.577 [2024-11-29 03:08:16.336397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:21:00.577 [2024-11-29 03:08:16.336403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.336476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.577 [2024-11-29 03:08:16.336482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.577 [2024-11-29 03:08:16.336489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:21:00.577 [2024-11-29 03:08:16.336495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.340660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.340688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.577 [2024-11-29 03:08:16.340695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.340701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.340745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.340752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.577 [2024-11-29 03:08:16.340758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.340764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.340793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.340800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.577 [2024-11-29 03:08:16.340806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.340812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.340824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.340847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.577 [2024-11-29 03:08:16.340853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.340859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.348307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.348340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.577 [2024-11-29 03:08:16.348347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.348353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.577 [2024-11-29 03:08:16.354630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.577 [2024-11-29 03:08:16.354706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.577 [2024-11-29 03:08:16.354746] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.577 [2024-11-29 03:08:16.354818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.577 [2024-11-29 03:08:16.354877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.577 [2024-11-29 03:08:16.354928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.354965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.577 [2024-11-29 03:08:16.354973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.577 [2024-11-29 03:08:16.354981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.577 [2024-11-29 03:08:16.354986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.577 [2024-11-29 03:08:16.355083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.731 ms, result 0 00:21:00.836 00:21:00.836 00:21:00.836 03:08:16 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:00.836 [2024-11-29 03:08:16.682751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:21:00.836 [2024-11-29 03:08:16.682889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89028 ] 00:21:00.836 [2024-11-29 03:08:16.823346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.096 [2024-11-29 03:08:16.845175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.096 [2024-11-29 03:08:16.930649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.096 [2024-11-29 03:08:16.930710] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.096 [2024-11-29 03:08:17.072658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.072789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.096 [2024-11-29 03:08:17.072805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.096 [2024-11-29 03:08:17.072816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.072869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.072877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.096 [2024-11-29 03:08:17.072884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:01.096 [2024-11-29 03:08:17.072894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.072913] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.096 [2024-11-29 03:08:17.073097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.096 [2024-11-29 03:08:17.073111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.073118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.096 [2024-11-29 03:08:17.073126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:21:01.096 [2024-11-29 03:08:17.073132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.074124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.096 [2024-11-29 03:08:17.075988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.076020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.096 [2024-11-29 03:08:17.076028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:21:01.096 [2024-11-29 03:08:17.076037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.076079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.076089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.096 [2024-11-29 03:08:17.076097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:01.096 [2024-11-29 03:08:17.076105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.080450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:01.096 [2024-11-29 03:08:17.080474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.096 [2024-11-29 03:08:17.080486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.318 ms 00:21:01.096 [2024-11-29 03:08:17.080491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.080551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.080558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.096 [2024-11-29 03:08:17.080564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:01.096 [2024-11-29 03:08:17.080569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.080610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.080617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.096 [2024-11-29 03:08:17.080623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.096 [2024-11-29 03:08:17.080630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.080646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.096 [2024-11-29 03:08:17.081843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.081867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.096 [2024-11-29 03:08:17.081874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:21:01.096 [2024-11-29 03:08:17.081880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.081901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.081907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.096 [2024-11-29 03:08:17.081916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:01.096 [2024-11-29 03:08:17.081924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.081938] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.096 [2024-11-29 03:08:17.081955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:01.096 [2024-11-29 03:08:17.081984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.096 [2024-11-29 03:08:17.081995] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:01.096 [2024-11-29 03:08:17.082073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.096 [2024-11-29 03:08:17.082080] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.096 [2024-11-29 03:08:17.082090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:01.096 [2024-11-29 03:08:17.082098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082104] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082110] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:01.096 [2024-11-29 03:08:17.082116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.096 [2024-11-29 03:08:17.082121] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.096 [2024-11-29 03:08:17.082126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.096 [2024-11-29 03:08:17.082132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.082143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.096 [2024-11-29 03:08:17.082149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:01.096 [2024-11-29 03:08:17.082154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.082219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.096 [2024-11-29 03:08:17.082225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.096 [2024-11-29 03:08:17.082230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:01.096 [2024-11-29 03:08:17.082236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.096 [2024-11-29 03:08:17.082310] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.096 [2024-11-29 03:08:17.082317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.096 [2024-11-29 03:08:17.082323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.096 [2024-11-29 03:08:17.082339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.096 [2024-11-29 03:08:17.082356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.096 [2024-11-29 03:08:17.082367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.096 [2024-11-29 03:08:17.082371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:01.096 [2024-11-29 03:08:17.082379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.096 [2024-11-29 03:08:17.082385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.096 [2024-11-29 03:08:17.082390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:01.096 [2024-11-29 03:08:17.082395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.096 [2024-11-29 03:08:17.082405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082410] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.096 [2024-11-29 03:08:17.082421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.096 [2024-11-29 03:08:17.082436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:01.096 [2024-11-29 03:08:17.082440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.096 [2024-11-29 03:08:17.082445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.097 [2024-11-29 03:08:17.082450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.097 [2024-11-29 03:08:17.082463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.097 [2024-11-29 03:08:17.082468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.097 [2024-11-29 03:08:17.082478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.097 [2024-11-29 03:08:17.082484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.097 [2024-11-29 03:08:17.082496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.097 [2024-11-29 03:08:17.082502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:01.097 [2024-11-29 03:08:17.082507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.097 [2024-11-29 03:08:17.082513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.097 [2024-11-29 03:08:17.082519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:01.097 [2024-11-29 03:08:17.082525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.097 [2024-11-29 03:08:17.082536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:01.097 [2024-11-29 03:08:17.082542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.097 [2024-11-29 03:08:17.082559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.097 [2024-11-29 03:08:17.082567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.097 [2024-11-29 03:08:17.082573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.097 [2024-11-29 03:08:17.082582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.097 [2024-11-29 03:08:17.082588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.097 [2024-11-29 03:08:17.082594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.097 
[2024-11-29 03:08:17.082600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.097 [2024-11-29 03:08:17.082606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.097 [2024-11-29 03:08:17.082612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.097 [2024-11-29 03:08:17.082619] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.097 [2024-11-29 03:08:17.082626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:01.097 [2024-11-29 03:08:17.082640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:01.097 [2024-11-29 03:08:17.082646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:01.097 [2024-11-29 03:08:17.082652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:01.097 [2024-11-29 03:08:17.082658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:01.097 [2024-11-29 03:08:17.082665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:01.097 [2024-11-29 03:08:17.082672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:01.097 [2024-11-29 03:08:17.082678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:01.097 [2024-11-29 03:08:17.082684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:01.097 [2024-11-29 03:08:17.082694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:01.097 [2024-11-29 03:08:17.082725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.097 [2024-11-29 03:08:17.082731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.097 [2024-11-29 03:08:17.082745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.097 [2024-11-29 03:08:17.082751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.097 [2024-11-29 03:08:17.082758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.097 [2024-11-29 03:08:17.082764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.097 [2024-11-29 03:08:17.082772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.097 [2024-11-29 03:08:17.082779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:21:01.097 [2024-11-29 03:08:17.082787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.090527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.090555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.356 [2024-11-29 03:08:17.090565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.706 ms 00:21:01.356 [2024-11-29 03:08:17.090570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.090630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.090636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.356 [2024-11-29 03:08:17.090646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:01.356 [2024-11-29 03:08:17.090652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.106382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.106421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.356 [2024-11-29 03:08:17.106436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.693 ms 00:21:01.356 [2024-11-29 03:08:17.106444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.106480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.106489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.356 [2024-11-29 03:08:17.106497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.356 [2024-11-29 03:08:17.106504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.106860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.106875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.356 [2024-11-29 03:08:17.106884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:01.356 [2024-11-29 03:08:17.106892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.107010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.107023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.356 [2024-11-29 03:08:17.107035] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:01.356 [2024-11-29 03:08:17.107043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.111973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.112005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.356 [2024-11-29 03:08:17.112015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:21:01.356 [2024-11-29 03:08:17.112022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.114339] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:01.356 [2024-11-29 03:08:17.114380] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:01.356 [2024-11-29 03:08:17.114397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.114405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:01.356 [2024-11-29 03:08:17.114414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:21:01.356 [2024-11-29 03:08:17.114422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.127566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.127592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:01.356 [2024-11-29 03:08:17.127608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:21:01.356 [2024-11-29 03:08:17.127614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.129241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.129359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:01.356 [2024-11-29 03:08:17.129371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:21:01.356 [2024-11-29 03:08:17.129376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.130623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.130649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:01.356 [2024-11-29 03:08:17.130656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:21:01.356 [2024-11-29 03:08:17.130662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.130907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.130916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.356 [2024-11-29 03:08:17.130924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:01.356 [2024-11-29 03:08:17.130935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.144481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.144608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.356 [2024-11-29 03:08:17.144622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.530 ms 00:21:01.356 [2024-11-29 03:08:17.144628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.150320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:01.356 [2024-11-29 03:08:17.152143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.152171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.356 [2024-11-29 03:08:17.152178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:21:01.356 [2024-11-29 03:08:17.152185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.152228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.152237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.356 [2024-11-29 03:08:17.152249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:01.356 [2024-11-29 03:08:17.152255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.152304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.152311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.356 [2024-11-29 03:08:17.152320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:01.356 [2024-11-29 03:08:17.152325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.152341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.152347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.356 [2024-11-29 03:08:17.152353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.356 [2024-11-29 03:08:17.152358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.152383] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.356 [2024-11-29 03:08:17.152390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.152399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.356 [2024-11-29 03:08:17.152406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:01.356 [2024-11-29 03:08:17.152413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.155409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.155525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.356 [2024-11-29 03:08:17.155538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:21:01.356 [2024-11-29 03:08:17.155545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 [2024-11-29 03:08:17.155600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.356 [2024-11-29 03:08:17.155608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.356 [2024-11-29 03:08:17.155615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:01.356 [2024-11-29 03:08:17.155621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.356 
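Note on reading the records above: each management step is logged by trace_step as an Action / name / duration / status quadruple. The step durations visible in this startup sequence sum to roughly 71.8 ms of the 83.393 ms 'FTL startup' total reported in the finish_msg record just below (the remaining time belongs to earlier steps of the same sequence). As a quick sanity check, the per-step durations can be totalled from a saved copy of this console output; the file name ftl.log here is only an assumption for illustration:

  # sum all 'duration: X ms' values emitted by trace_step in the saved log
  grep -o 'duration: [0-9.]* ms' ftl.log | awk '{ s += $2 } END { print s, "ms" }'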
[2024-11-29 03:08:17.156365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.393 ms, result 0 00:21:02.301  [2024-11-29T03:08:19.317Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-29T03:08:20.703Z] Copying: 38/1024 [MB] (16 MBps) [2024-11-29T03:08:21.646Z] Copying: 59/1024 [MB] (21 MBps) [2024-11-29T03:08:22.588Z] Copying: 85/1024 [MB] (25 MBps) [2024-11-29T03:08:23.530Z] Copying: 99/1024 [MB] (13 MBps) [2024-11-29T03:08:24.473Z] Copying: 121/1024 [MB] (21 MBps) [2024-11-29T03:08:25.413Z] Copying: 137/1024 [MB] (16 MBps) [2024-11-29T03:08:26.354Z] Copying: 159/1024 [MB] (22 MBps) [2024-11-29T03:08:27.296Z] Copying: 178/1024 [MB] (18 MBps) [2024-11-29T03:08:28.682Z] Copying: 193/1024 [MB] (15 MBps) [2024-11-29T03:08:29.628Z] Copying: 219/1024 [MB] (26 MBps) [2024-11-29T03:08:30.573Z] Copying: 240/1024 [MB] (20 MBps) [2024-11-29T03:08:31.518Z] Copying: 258/1024 [MB] (18 MBps) [2024-11-29T03:08:32.459Z] Copying: 280/1024 [MB] (21 MBps) [2024-11-29T03:08:33.406Z] Copying: 297/1024 [MB] (16 MBps) [2024-11-29T03:08:34.350Z] Copying: 318/1024 [MB] (21 MBps) [2024-11-29T03:08:35.293Z] Copying: 335/1024 [MB] (17 MBps) [2024-11-29T03:08:36.695Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-29T03:08:37.640Z] Copying: 376/1024 [MB] (18 MBps) [2024-11-29T03:08:38.583Z] Copying: 392/1024 [MB] (15 MBps) [2024-11-29T03:08:39.528Z] Copying: 402/1024 [MB] (10 MBps) [2024-11-29T03:08:40.474Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-29T03:08:41.419Z] Copying: 423/1024 [MB] (10 MBps) [2024-11-29T03:08:42.360Z] Copying: 433/1024 [MB] (10 MBps) [2024-11-29T03:08:43.304Z] Copying: 456/1024 [MB] (22 MBps) [2024-11-29T03:08:44.692Z] Copying: 471/1024 [MB] (15 MBps) [2024-11-29T03:08:45.638Z] Copying: 490/1024 [MB] (19 MBps) [2024-11-29T03:08:46.583Z] Copying: 503/1024 [MB] (12 MBps) [2024-11-29T03:08:47.527Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-29T03:08:48.472Z] Copying: 538/1024 [MB] (21 MBps) [2024-11-29T03:08:49.417Z] Copying: 557/1024 [MB] (18 MBps) [2024-11-29T03:08:50.385Z] Copying: 578/1024 [MB] (20 MBps) [2024-11-29T03:08:51.369Z] Copying: 595/1024 [MB] (17 MBps) [2024-11-29T03:08:52.310Z] Copying: 614/1024 [MB] (19 MBps) [2024-11-29T03:08:53.694Z] Copying: 632/1024 [MB] (17 MBps) [2024-11-29T03:08:54.636Z] Copying: 650/1024 [MB] (17 MBps) [2024-11-29T03:08:55.579Z] Copying: 669/1024 [MB] (19 MBps) [2024-11-29T03:08:56.523Z] Copying: 692/1024 [MB] (22 MBps) [2024-11-29T03:08:57.464Z] Copying: 711/1024 [MB] (18 MBps) [2024-11-29T03:08:58.408Z] Copying: 731/1024 [MB] (20 MBps) [2024-11-29T03:08:59.353Z] Copying: 755/1024 [MB] (23 MBps) [2024-11-29T03:09:00.298Z] Copying: 772/1024 [MB] (17 MBps) [2024-11-29T03:09:01.683Z] Copying: 788/1024 [MB] (16 MBps) [2024-11-29T03:09:02.627Z] Copying: 806/1024 [MB] (17 MBps) [2024-11-29T03:09:03.572Z] Copying: 817/1024 [MB] (10 MBps) [2024-11-29T03:09:04.515Z] Copying: 828/1024 [MB] (10 MBps) [2024-11-29T03:09:05.457Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-29T03:09:06.401Z] Copying: 857/1024 [MB] (17 MBps) [2024-11-29T03:09:07.347Z] Copying: 872/1024 [MB] (14 MBps) [2024-11-29T03:09:08.294Z] Copying: 888/1024 [MB] (16 MBps) [2024-11-29T03:09:09.683Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-29T03:09:10.628Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-29T03:09:11.572Z] Copying: 939/1024 [MB] (21 MBps) [2024-11-29T03:09:12.517Z] Copying: 964/1024 [MB] (25 MBps) [2024-11-29T03:09:13.462Z] Copying: 975/1024 [MB] (11 MBps) [2024-11-29T03:09:14.406Z] Copying: 989/1024 [MB] (13 MBps) 
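The Copying records are progress output from the copy phase of the restore test, flattened onto one stretch of the log most likely because the tool rewrites its progress line in place. Between the first sample at ~03:08:19 and the end of the run at ~03:09:15 the test moves 1024 MB in roughly 58 s, which lines up with the 'average 17 MBps' figure reported at the 1024/1024 record just below:

  # rough average throughput over the copy: 1024 MB in ~58 s
  echo $(( 1024 / 58 )) MBps    # integer estimate: 17 MBps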
[2024-11-29T03:09:15.349Z] Copying: 1006/1024 [MB] (16 MBps) [2024-11-29T03:09:15.610Z] Copying: 1020/1024 [MB] (13 MBps) [2024-11-29T03:09:15.872Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 03:09:15.846617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.880 [2024-11-29 03:09:15.846693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.880 [2024-11-29 03:09:15.846714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:59.880 [2024-11-29 03:09:15.846723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.880 [2024-11-29 03:09:15.846750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.880 [2024-11-29 03:09:15.847545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.880 [2024-11-29 03:09:15.847580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.880 [2024-11-29 03:09:15.847601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:21:59.880 [2024-11-29 03:09:15.847611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.880 [2024-11-29 03:09:15.847872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.880 [2024-11-29 03:09:15.847945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.880 [2024-11-29 03:09:15.847958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:21:59.880 [2024-11-29 03:09:15.847977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.880 [2024-11-29 03:09:15.851434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.880 [2024-11-29 03:09:15.851580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.880 [2024-11-29 03:09:15.851597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:21:59.880 [2024-11-29 03:09:15.851614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.881 [2024-11-29 03:09:15.859162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.881 [2024-11-29 03:09:15.859196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:59.881 [2024-11-29 03:09:15.859207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.523 ms 00:21:59.881 [2024-11-29 03:09:15.859215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.881 [2024-11-29 03:09:15.863186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.881 [2024-11-29 03:09:15.863338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.881 [2024-11-29 03:09:15.863400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.896 ms 00:21:59.881 [2024-11-29 03:09:15.863424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.881 [2024-11-29 03:09:15.868619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.881 [2024-11-29 03:09:15.868806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.881 [2024-11-29 03:09:15.868894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.146 ms 00:21:59.881 [2024-11-29 03:09:15.869034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.881 [2024-11-29 03:09:15.869160] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:59.881 [2024-11-29 03:09:15.869288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.881 [2024-11-29 03:09:15.869380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:59.881 [2024-11-29 03:09:15.869398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.144 [2024-11-29 03:09:15.872475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.144 [2024-11-29 03:09:15.872617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.144 [2024-11-29 03:09:15.872634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:22:00.144 [2024-11-29 03:09:15.872642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.144 [2024-11-29 03:09:15.875374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.144 [2024-11-29 03:09:15.875408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.144 [2024-11-29 03:09:15.875420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:22:00.144 [2024-11-29 03:09:15.875427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.144 [2024-11-29 03:09:15.877734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.144 [2024-11-29 03:09:15.877878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.144 [2024-11-29 03:09:15.877932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:22:00.144 [2024-11-29 03:09:15.877955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.144 [2024-11-29 03:09:15.880232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.144 [2024-11-29 03:09:15.880370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.144 [2024-11-29 03:09:15.880426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:22:00.144 [2024-11-29 03:09:15.880448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.144 [2024-11-29 03:09:15.880569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.144 [2024-11-29 03:09:15.880681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.880820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.144 [2024-11-29 03:09:15.881709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.881992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882178] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 
03:09:15.882387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.145 [2024-11-29 03:09:15.882527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.145 [2024-11-29 03:09:15.882535] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4399c0a4-156c-4cb4-889f-ab275081974c 00:22:00.145 [2024-11-29 03:09:15.882544] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.145 [2024-11-29 03:09:15.882551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:00.145 [2024-11-29 03:09:15.882558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.145 [2024-11-29 03:09:15.882567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.145 [2024-11-29 03:09:15.882574] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.145 [2024-11-29 03:09:15.882582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.145 [2024-11-29 03:09:15.882593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.145 [2024-11-29 03:09:15.882599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
low: 0 00:22:00.145 [2024-11-29 03:09:15.882608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.146 [2024-11-29 03:09:15.882617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.146 [2024-11-29 03:09:15.882633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.146 [2024-11-29 03:09:15.882642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:22:00.146 [2024-11-29 03:09:15.882650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.885178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.146 [2024-11-29 03:09:15.885228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.146 [2024-11-29 03:09:15.885253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:22:00.146 [2024-11-29 03:09:15.885273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.885483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.146 [2024-11-29 03:09:15.885558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.146 [2024-11-29 03:09:15.885608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:00.146 [2024-11-29 03:09:15.885652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.893888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.894027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.146 [2024-11-29 03:09:15.894079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.894104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.894197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.894219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.146 [2024-11-29 03:09:15.894239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.894258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.894549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.894594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.146 [2024-11-29 03:09:15.894615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.894634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.894673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.894771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.146 [2024-11-29 03:09:15.894803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.894822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.908743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.908914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.146 [2024-11-29 03:09:15.908982] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.909011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.920143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.920295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.146 [2024-11-29 03:09:15.920351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.920373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.920437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.920460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.146 [2024-11-29 03:09:15.920480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.920500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.920549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.920578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.146 [2024-11-29 03:09:15.920599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.920650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.920745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.920769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.146 [2024-11-29 03:09:15.920844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.920869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.920927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.921283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.146 [2024-11-29 03:09:15.921349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.921372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.921590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.921635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.146 [2024-11-29 03:09:15.921657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.921676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.921754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.146 [2024-11-29 03:09:15.921864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.146 [2024-11-29 03:09:15.921888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.146 [2024-11-29 03:09:15.921919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.146 [2024-11-29 03:09:15.922072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.419 ms, result 0 00:22:00.408 00:22:00.408 
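In the statistics dump above, WAF is the write-amplification factor, evidently total writes over user writes: with total writes 960 and user writes 0 the ratio has a zero denominator, hence 'WAF: inf'. A minimal sketch of the same computation, guarded against the zero denominator (the values are taken from the dump above; the formula is an assumption about how the log derives 'inf'):

  # WAF = total writes / user writes; 0 user writes is reported as inf
  awk 'BEGIN { tw = 960; uw = 0; print (uw ? tw / uw : "inf") }'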
00:22:00.408 03:09:16 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:02.962 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:02.962 03:09:18 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:02.962 [2024-11-29 03:09:18.468949] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:22:02.962 [2024-11-29 03:09:18.469088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89667 ] 00:22:02.962 [2024-11-29 03:09:18.615389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.962 [2024-11-29 03:09:18.644032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.962 [2024-11-29 03:09:18.760579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.962 [2024-11-29 03:09:18.760673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.962 [2024-11-29 03:09:18.921189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.921248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.962 [2024-11-29 03:09:18.921263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.962 [2024-11-29 03:09:18.921271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.921336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.921346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.962 [2024-11-29 03:09:18.921356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:02.962 [2024-11-29 03:09:18.921372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.921402] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.962 [2024-11-29 03:09:18.921666] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.962 [2024-11-29 03:09:18.921685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.921719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.962 [2024-11-29 03:09:18.921735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:22:02.962 [2024-11-29 03:09:18.921743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.923546] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.962 [2024-11-29 03:09:18.927308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.927358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.962 [2024-11-29 03:09:18.927369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:22:02.962 [2024-11-29 03:09:18.927385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:02.962 [2024-11-29 03:09:18.927454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.927466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.962 [2024-11-29 03:09:18.927476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:02.962 [2024-11-29 03:09:18.927484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.935431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.935474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.962 [2024-11-29 03:09:18.935488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.906 ms 00:22:02.962 [2024-11-29 03:09:18.935496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.935587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.935597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.962 [2024-11-29 03:09:18.935611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:02.962 [2024-11-29 03:09:18.935621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.935678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.935688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.962 [2024-11-29 03:09:18.935702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:02.962 [2024-11-29 03:09:18.935715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.935735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.962 [2024-11-29 03:09:18.937822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.937875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.962 [2024-11-29 03:09:18.937885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:22:02.962 [2024-11-29 03:09:18.937893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.937927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.937940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.962 [2024-11-29 03:09:18.937948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.962 [2024-11-29 03:09:18.937959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.937986] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.962 [2024-11-29 03:09:18.938007] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.962 [2024-11-29 03:09:18.938048] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.962 [2024-11-29 03:09:18.938068] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.962 [2024-11-29 03:09:18.938173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.962 [2024-11-29 03:09:18.938185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.962 [2024-11-29 03:09:18.938199] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.962 [2024-11-29 03:09:18.938209] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938218] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938226] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.962 [2024-11-29 03:09:18.938236] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.962 [2024-11-29 03:09:18.938244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.962 [2024-11-29 03:09:18.938252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.962 [2024-11-29 03:09:18.938260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.938267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.962 [2024-11-29 03:09:18.938275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:02.962 [2024-11-29 03:09:18.938284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.938368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.962 [2024-11-29 03:09:18.938381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.962 [2024-11-29 03:09:18.938388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.962 [2024-11-29 03:09:18.938395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.962 [2024-11-29 03:09:18.938497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.962 [2024-11-29 03:09:18.938509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.962 [2024-11-29 03:09:18.938518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.962 [2024-11-29 03:09:18.938548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.962 [2024-11-29 03:09:18.938573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.962 [2024-11-29 03:09:18.938591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.962 [2024-11-29 03:09:18.938600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.962 [2024-11-29 03:09:18.938607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.962 [2024-11-29 03:09:18.938615] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:22:02.962 [2024-11-29 03:09:18.938622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.962 [2024-11-29 03:09:18.938630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.962 [2024-11-29 03:09:18.938650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.962 [2024-11-29 03:09:18.938673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.962 [2024-11-29 03:09:18.938680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.962 [2024-11-29 03:09:18.938688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.962 [2024-11-29 03:09:18.938696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.963 [2024-11-29 03:09:18.938711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.963 [2024-11-29 03:09:18.938723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.963 [2024-11-29 03:09:18.938739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.963 [2024-11-29 03:09:18.938746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.963 [2024-11-29 03:09:18.938761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.963 [2024-11-29 03:09:18.938769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.963 [2024-11-29 03:09:18.938783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.963 [2024-11-29 03:09:18.938790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.963 [2024-11-29 03:09:18.938798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.963 [2024-11-29 03:09:18.938806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.963 [2024-11-29 03:09:18.938815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.963 [2024-11-29 03:09:18.938823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.963 [2024-11-29 03:09:18.938861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.963 [2024-11-29 03:09:18.938871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938878] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.963 [2024-11-29 03:09:18.938889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.963 [2024-11-29 
03:09:18.938900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.963 [2024-11-29 03:09:18.938907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.963 [2024-11-29 03:09:18.938914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.963 [2024-11-29 03:09:18.938923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.963 [2024-11-29 03:09:18.938930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.963 [2024-11-29 03:09:18.938938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.963 [2024-11-29 03:09:18.938945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.963 [2024-11-29 03:09:18.938952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.963 [2024-11-29 03:09:18.938961] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.963 [2024-11-29 03:09:18.938970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.938983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.963 [2024-11-29 03:09:18.938992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.963 [2024-11-29 03:09:18.938999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.963 [2024-11-29 03:09:18.939008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.963 [2024-11-29 03:09:18.939016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.963 [2024-11-29 03:09:18.939023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.963 [2024-11-29 03:09:18.939030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.963 [2024-11-29 03:09:18.939038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.963 [2024-11-29 03:09:18.939044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.963 [2024-11-29 03:09:18.939057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.963 [2024-11-29 03:09:18.939092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.963 [2024-11-29 03:09:18.939100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.963 [2024-11-29 03:09:18.939116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.963 [2024-11-29 03:09:18.939123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.963 [2024-11-29 03:09:18.939132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.963 [2024-11-29 03:09:18.939140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.963 [2024-11-29 03:09:18.939147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.963 [2024-11-29 03:09:18.939155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:22:02.963 [2024-11-29 03:09:18.939165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.953341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.224 [2024-11-29 03:09:18.953549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.224 [2024-11-29 03:09:18.953569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.130 ms 00:22:03.224 [2024-11-29 03:09:18.953578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.953679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.224 [2024-11-29 03:09:18.953687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.224 [2024-11-29 03:09:18.953708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:03.224 [2024-11-29 03:09:18.953716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.977272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.224 [2024-11-29 03:09:18.977356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.224 [2024-11-29 03:09:18.977378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.490 ms 00:22:03.224 [2024-11-29 03:09:18.977392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.977469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.224 [2024-11-29 03:09:18.977494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.224 [2024-11-29 03:09:18.977510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.224 [2024-11-29 03:09:18.977533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.978244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.224 [2024-11-29 03:09:18.978291] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.224 [2024-11-29 03:09:18.978312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:22:03.224 [2024-11-29 03:09:18.978328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.224 [2024-11-29 03:09:18.978561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:18.978587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.225 [2024-11-29 03:09:18.978602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:22:03.225 [2024-11-29 03:09:18.978622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:18.986922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:18.986964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.225 [2024-11-29 03:09:18.986981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.262 ms 00:22:03.225 [2024-11-29 03:09:18.986990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:18.990756] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:03.225 [2024-11-29 03:09:18.990805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.225 [2024-11-29 03:09:18.990821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:18.990854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.225 [2024-11-29 03:09:18.990864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:22:03.225 [2024-11-29 03:09:18.990871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.006620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.006667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.225 [2024-11-29 03:09:19.006685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:22:03.225 [2024-11-29 03:09:19.006693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.009711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.009755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.225 [2024-11-29 03:09:19.009766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.965 ms 00:22:03.225 [2024-11-29 03:09:19.009773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.012331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.012501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:03.225 [2024-11-29 03:09:19.012519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:22:03.225 [2024-11-29 03:09:19.012526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.012882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.012899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 
00:22:03.225 [2024-11-29 03:09:19.012909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:22:03.225 [2024-11-29 03:09:19.012921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.036214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.036277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.225 [2024-11-29 03:09:19.036290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.269 ms 00:22:03.225 [2024-11-29 03:09:19.036299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.044485] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:03.225 [2024-11-29 03:09:19.047539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.047713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.225 [2024-11-29 03:09:19.047732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.186 ms 00:22:03.225 [2024-11-29 03:09:19.047756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.047868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.047883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.225 [2024-11-29 03:09:19.047901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:03.225 [2024-11-29 03:09:19.047909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.047980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.047998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.225 [2024-11-29 03:09:19.048007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:03.225 [2024-11-29 03:09:19.048015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.048039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.048048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.225 [2024-11-29 03:09:19.048057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.225 [2024-11-29 03:09:19.048066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.048101] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.225 [2024-11-29 03:09:19.048112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.048124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.225 [2024-11-29 03:09:19.048132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:03.225 [2024-11-29 03:09:19.048140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.053658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.053731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.225 [2024-11-29 03:09:19.053744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.496 ms 00:22:03.225 
[2024-11-29 03:09:19.053762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.053870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.225 [2024-11-29 03:09:19.053883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:03.225 [2024-11-29 03:09:19.053899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:03.225 [2024-11-29 03:09:19.053907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.225 [2024-11-29 03:09:19.055081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.404 ms, result 0 00:22:04.166  [2024-11-29T03:09:21.105Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T03:09:22.539Z] Copying: 31/1024 [MB] (14 MBps) [2024-11-29T03:09:23.128Z] Copying: 61/1024 [MB] (30 MBps) [2024-11-29T03:09:24.074Z] Copying: 103/1024 [MB] (42 MBps) [2024-11-29T03:09:25.462Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-29T03:09:26.428Z] Copying: 133/1024 [MB] (18 MBps) [2024-11-29T03:09:27.372Z] Copying: 146/1024 [MB] (13 MBps) [2024-11-29T03:09:28.316Z] Copying: 162/1024 [MB] (15 MBps) [2024-11-29T03:09:29.257Z] Copying: 180/1024 [MB] (17 MBps) [2024-11-29T03:09:30.199Z] Copying: 192/1024 [MB] (12 MBps) [2024-11-29T03:09:31.143Z] Copying: 208/1024 [MB] (15 MBps) [2024-11-29T03:09:32.088Z] Copying: 223/1024 [MB] (15 MBps) [2024-11-29T03:09:33.477Z] Copying: 234/1024 [MB] (10 MBps) [2024-11-29T03:09:34.431Z] Copying: 249/1024 [MB] (15 MBps) [2024-11-29T03:09:35.369Z] Copying: 265/1024 [MB] (15 MBps) [2024-11-29T03:09:36.305Z] Copying: 292/1024 [MB] (27 MBps) [2024-11-29T03:09:37.248Z] Copying: 309/1024 [MB] (16 MBps) [2024-11-29T03:09:38.192Z] Copying: 328/1024 [MB] (19 MBps) [2024-11-29T03:09:39.137Z] Copying: 344/1024 [MB] (15 MBps) [2024-11-29T03:09:40.081Z] Copying: 361/1024 [MB] (16 MBps) [2024-11-29T03:09:41.466Z] Copying: 380/1024 [MB] (19 MBps) [2024-11-29T03:09:42.409Z] Copying: 401/1024 [MB] (21 MBps) [2024-11-29T03:09:43.353Z] Copying: 417/1024 [MB] (15 MBps) [2024-11-29T03:09:44.297Z] Copying: 434/1024 [MB] (17 MBps) [2024-11-29T03:09:45.260Z] Copying: 448/1024 [MB] (13 MBps) [2024-11-29T03:09:46.211Z] Copying: 469/1024 [MB] (21 MBps) [2024-11-29T03:09:47.157Z] Copying: 487/1024 [MB] (18 MBps) [2024-11-29T03:09:48.102Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-29T03:09:49.489Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-29T03:09:50.436Z] Copying: 522/1024 [MB] (13 MBps) [2024-11-29T03:09:51.381Z] Copying: 538/1024 [MB] (16 MBps) [2024-11-29T03:09:52.326Z] Copying: 555/1024 [MB] (17 MBps) [2024-11-29T03:09:53.272Z] Copying: 571/1024 [MB] (15 MBps) [2024-11-29T03:09:54.227Z] Copying: 589/1024 [MB] (18 MBps) [2024-11-29T03:09:55.212Z] Copying: 609/1024 [MB] (19 MBps) [2024-11-29T03:09:56.157Z] Copying: 628/1024 [MB] (19 MBps) [2024-11-29T03:09:57.102Z] Copying: 647/1024 [MB] (18 MBps) [2024-11-29T03:09:58.489Z] Copying: 665/1024 [MB] (17 MBps) [2024-11-29T03:09:59.435Z] Copying: 682/1024 [MB] (17 MBps) [2024-11-29T03:10:00.374Z] Copying: 694/1024 [MB] (12 MBps) [2024-11-29T03:10:01.313Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-29T03:10:02.256Z] Copying: 740/1024 [MB] (35 MBps) [2024-11-29T03:10:03.198Z] Copying: 761/1024 [MB] (20 MBps) [2024-11-29T03:10:04.141Z] Copying: 778/1024 [MB] (17 MBps) [2024-11-29T03:10:05.087Z] Copying: 797/1024 [MB] (19 MBps) [2024-11-29T03:10:06.475Z] Copying: 815/1024 [MB] (18 MBps) [2024-11-29T03:10:07.421Z] Copying: 832/1024 [MB] (16 
MBps) [2024-11-29T03:10:08.363Z] Copying: 846/1024 [MB] (14 MBps) [2024-11-29T03:10:09.308Z] Copying: 862/1024 [MB] (15 MBps) [2024-11-29T03:10:10.253Z] Copying: 881/1024 [MB] (18 MBps) [2024-11-29T03:10:11.199Z] Copying: 904/1024 [MB] (23 MBps) [2024-11-29T03:10:12.146Z] Copying: 924/1024 [MB] (19 MBps) [2024-11-29T03:10:13.090Z] Copying: 934/1024 [MB] (10 MBps) [2024-11-29T03:10:14.469Z] Copying: 952/1024 [MB] (17 MBps) [2024-11-29T03:10:15.410Z] Copying: 996/1024 [MB] (43 MBps) [2024-11-29T03:10:16.355Z] Copying: 1017/1024 [MB] (20 MBps) [2024-11-29T03:10:16.617Z] Copying: 1048120/1048576 [kB] (6372 kBps) [2024-11-29T03:10:16.617Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 03:10:16.511100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.511180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:00.625 [2024-11-29 03:10:16.511199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:00.625 [2024-11-29 03:10:16.511209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 03:10:16.512714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:00.625 [2024-11-29 03:10:16.513791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.513863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:00.625 [2024-11-29 03:10:16.513875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:23:00.625 [2024-11-29 03:10:16.513884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 03:10:16.526045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.526094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:00.625 [2024-11-29 03:10:16.526107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.939 ms 00:23:00.625 [2024-11-29 03:10:16.526116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 03:10:16.548863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.548914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:00.625 [2024-11-29 03:10:16.548928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.727 ms 00:23:00.625 [2024-11-29 03:10:16.548944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 03:10:16.555150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.555195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:00.625 [2024-11-29 03:10:16.555208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:23:00.625 [2024-11-29 03:10:16.555217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 03:10:16.558370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.558421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:00.625 [2024-11-29 03:10:16.558433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:23:00.625 [2024-11-29 03:10:16.558441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.625 [2024-11-29 
03:10:16.562609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.625 [2024-11-29 03:10:16.562673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:00.625 [2024-11-29 03:10:16.562689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:23:00.625 [2024-11-29 03:10:16.562697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.697781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.887 [2024-11-29 03:10:16.697870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:00.887 [2024-11-29 03:10:16.697884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 135.032 ms 00:23:00.887 [2024-11-29 03:10:16.697892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.700355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.887 [2024-11-29 03:10:16.700560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:00.887 [2024-11-29 03:10:16.700581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:23:00.887 [2024-11-29 03:10:16.700588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.702657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.887 [2024-11-29 03:10:16.702707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:00.887 [2024-11-29 03:10:16.702717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:23:00.887 [2024-11-29 03:10:16.702724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.704426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.887 [2024-11-29 03:10:16.704475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:00.887 [2024-11-29 03:10:16.704484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:23:00.887 [2024-11-29 03:10:16.704492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.706203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.887 [2024-11-29 03:10:16.706252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:00.887 [2024-11-29 03:10:16.706261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:23:00.887 [2024-11-29 03:10:16.706269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.887 [2024-11-29 03:10:16.706311] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:00.887 [2024-11-29 03:10:16.706325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102144 / 261120 wr_cnt: 1 state: open 00:23:00.887 [2024-11-29 03:10:16.706336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 
03:10:16.706369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:23:00.887 [2024-11-29 03:10:16.706560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:00.887 [2024-11-29 03:10:16.706597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.706995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:00.888 [2024-11-29 03:10:16.707155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:00.888 [2024-11-29 03:10:16.707163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4399c0a4-156c-4cb4-889f-ab275081974c 00:23:00.888 [2024-11-29 03:10:16.707175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102144 00:23:00.888 [2024-11-29 03:10:16.707185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104 00:23:00.888 [2024-11-29 03:10:16.707198] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102144 00:23:00.888 [2024-11-29 
03:10:16.707207] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:23:00.888 [2024-11-29 03:10:16.707214] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:00.888 [2024-11-29 03:10:16.707223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:00.888 [2024-11-29 03:10:16.707230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:00.888 [2024-11-29 03:10:16.707237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:00.888 [2024-11-29 03:10:16.707244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:00.888 [2024-11-29 03:10:16.707251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.888 [2024-11-29 03:10:16.707259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:00.888 [2024-11-29 03:10:16.707269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:23:00.888 [2024-11-29 03:10:16.707277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.888 [2024-11-29 03:10:16.709916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.888 [2024-11-29 03:10:16.710062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:00.888 [2024-11-29 03:10:16.710118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:23:00.888 [2024-11-29 03:10:16.710144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.888 [2024-11-29 03:10:16.710278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.888 [2024-11-29 03:10:16.710307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:00.888 [2024-11-29 03:10:16.710391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:00.888 [2024-11-29 03:10:16.710421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.888 [2024-11-29 03:10:16.717981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.888 [2024-11-29 03:10:16.718145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.888 [2024-11-29 03:10:16.718203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.888 [2024-11-29 03:10:16.718236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.888 [2024-11-29 03:10:16.718309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.888 [2024-11-29 03:10:16.718331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.888 [2024-11-29 03:10:16.718351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.888 [2024-11-29 03:10:16.718374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.888 [2024-11-29 03:10:16.718455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.888 [2024-11-29 03:10:16.718546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.889 [2024-11-29 03:10:16.718567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.718587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.718619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.718642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:23:00.889 [2024-11-29 03:10:16.718662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.718977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.732212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.732382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.889 [2024-11-29 03:10:16.732436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.732458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.742354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.742510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.889 [2024-11-29 03:10:16.742562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.742592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.742652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.742682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.889 [2024-11-29 03:10:16.742702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.742720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.742769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.742791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.889 [2024-11-29 03:10:16.742812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.742894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.742998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.743011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.889 [2024-11-29 03:10:16.743020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.743028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.743058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.743068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:00.889 [2024-11-29 03:10:16.743076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.743084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.743123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 03:10:16.743133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.889 [2024-11-29 03:10:16.743141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.743148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.743194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.889 [2024-11-29 
03:10:16.743204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.889 [2024-11-29 03:10:16.743213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.889 [2024-11-29 03:10:16.743227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.889 [2024-11-29 03:10:16.743361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 234.414 ms, result 0 00:23:01.829 00:23:01.829 00:23:01.829 03:10:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:01.829 [2024-11-29 03:10:17.759019] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:23:01.829 [2024-11-29 03:10:17.759378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90276 ] 00:23:02.090 [2024-11-29 03:10:17.907709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.090 [2024-11-29 03:10:17.936978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.090 [2024-11-29 03:10:18.050231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.090 [2024-11-29 03:10:18.050558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.354 [2024-11-29 03:10:18.211594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.211656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.354 [2024-11-29 03:10:18.211671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.354 [2024-11-29 03:10:18.211680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.211742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.211752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.354 [2024-11-29 03:10:18.211761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:02.354 [2024-11-29 03:10:18.211778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.211806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.354 [2024-11-29 03:10:18.212110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.354 [2024-11-29 03:10:18.212129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.212138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.354 [2024-11-29 03:10:18.212150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:02.354 [2024-11-29 03:10:18.212158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.214053] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.354 [2024-11-29 03:10:18.217957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:02.354 [2024-11-29 03:10:18.218015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.354 [2024-11-29 03:10:18.218027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.908 ms 00:23:02.354 [2024-11-29 03:10:18.218041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.218112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.218123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.354 [2024-11-29 03:10:18.218132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:02.354 [2024-11-29 03:10:18.218139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.226355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.226399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.354 [2024-11-29 03:10:18.226417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.171 ms 00:23:02.354 [2024-11-29 03:10:18.226428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.226521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.226530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.354 [2024-11-29 03:10:18.226539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:02.354 [2024-11-29 03:10:18.226547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.226613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.226624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.354 [2024-11-29 03:10:18.226632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:02.354 [2024-11-29 03:10:18.226648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.226669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.354 [2024-11-29 03:10:18.228781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.228817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.354 [2024-11-29 03:10:18.228856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:23:02.354 [2024-11-29 03:10:18.228868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.228907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.354 [2024-11-29 03:10:18.228917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.354 [2024-11-29 03:10:18.228925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:02.354 [2024-11-29 03:10:18.228936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.354 [2024-11-29 03:10:18.228958] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.354 [2024-11-29 03:10:18.228980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.354 [2024-11-29 03:10:18.229021] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.354 [2024-11-29 03:10:18.229045] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:02.354 [2024-11-29 03:10:18.229149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.354 [2024-11-29 03:10:18.229161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.354 [2024-11-29 03:10:18.229175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.354 [2024-11-29 03:10:18.229185] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.354 [2024-11-29 03:10:18.229194] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.354 [2024-11-29 03:10:18.229203] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.355 [2024-11-29 03:10:18.229210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.355 [2024-11-29 03:10:18.229221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.355 [2024-11-29 03:10:18.229231] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.355 [2024-11-29 03:10:18.229240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.355 [2024-11-29 03:10:18.229247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.355 [2024-11-29 03:10:18.229255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:02.355 [2024-11-29 03:10:18.229263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.355 [2024-11-29 03:10:18.229349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.355 [2024-11-29 03:10:18.229358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.355 [2024-11-29 03:10:18.229366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:02.355 [2024-11-29 03:10:18.229373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.355 [2024-11-29 03:10:18.229479] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.355 [2024-11-29 03:10:18.229497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.355 [2024-11-29 03:10:18.229510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.355 [2024-11-29 03:10:18.229540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.355 [2024-11-29 03:10:18.229568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.355 [2024-11-29 03:10:18.229586] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.355 [2024-11-29 03:10:18.229594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.355 [2024-11-29 03:10:18.229601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.355 [2024-11-29 03:10:18.229609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.355 [2024-11-29 03:10:18.229617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.355 [2024-11-29 03:10:18.229625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.355 [2024-11-29 03:10:18.229641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.355 [2024-11-29 03:10:18.229664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.355 [2024-11-29 03:10:18.229689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.355 [2024-11-29 03:10:18.229712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.355 [2024-11-29 03:10:18.229735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.355 [2024-11-29 03:10:18.229773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.355 [2024-11-29 03:10:18.229788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.355 [2024-11-29 03:10:18.229796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.355 [2024-11-29 03:10:18.229803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.355 [2024-11-29 03:10:18.229812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.355 [2024-11-29 03:10:18.229822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.355 [2024-11-29 03:10:18.229855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.355 [2024-11-29 03:10:18.229872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.355 
[2024-11-29 03:10:18.229881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.355 [2024-11-29 03:10:18.229903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.355 [2024-11-29 03:10:18.229911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.355 [2024-11-29 03:10:18.229929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.355 [2024-11-29 03:10:18.229938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.355 [2024-11-29 03:10:18.229946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.355 [2024-11-29 03:10:18.229954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.355 [2024-11-29 03:10:18.229971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.355 [2024-11-29 03:10:18.229978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.355 [2024-11-29 03:10:18.229986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.355 [2024-11-29 03:10:18.229998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.355 [2024-11-29 03:10:18.230014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.355 [2024-11-29 03:10:18.230021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.355 [2024-11-29 03:10:18.230029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.355 [2024-11-29 03:10:18.230036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.355 [2024-11-29 03:10:18.230043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.355 [2024-11-29 03:10:18.230051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.355 [2024-11-29 03:10:18.230059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.355 [2024-11-29 03:10:18.230066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.355 [2024-11-29 03:10:18.230078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230092] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.355 [2024-11-29 03:10:18.230113] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.355 [2024-11-29 03:10:18.230124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.355 [2024-11-29 03:10:18.230140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.355 [2024-11-29 03:10:18.230147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.355 [2024-11-29 03:10:18.230157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.355 [2024-11-29 03:10:18.230165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.355 [2024-11-29 03:10:18.230173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.355 [2024-11-29 03:10:18.230185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:23:02.355 [2024-11-29 03:10:18.230195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.355 [2024-11-29 03:10:18.244387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.355 [2024-11-29 03:10:18.244431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.355 [2024-11-29 03:10:18.244442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.138 ms 00:23:02.355 [2024-11-29 03:10:18.244450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.355 [2024-11-29 03:10:18.244538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.355 [2024-11-29 03:10:18.244547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.355 [2024-11-29 03:10:18.244555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:02.355 [2024-11-29 03:10:18.244564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.355 [2024-11-29 03:10:18.267743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.268036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.356 [2024-11-29 03:10:18.268070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.122 ms 00:23:02.356 [2024-11-29 03:10:18.268093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.268166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.268183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.356 
[2024-11-29 03:10:18.268199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.356 [2024-11-29 03:10:18.268211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.268858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.268911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.356 [2024-11-29 03:10:18.268930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:23:02.356 [2024-11-29 03:10:18.268943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.269178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.269196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.356 [2024-11-29 03:10:18.269211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:23:02.356 [2024-11-29 03:10:18.269225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.277535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.277590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.356 [2024-11-29 03:10:18.277601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.270 ms 00:23:02.356 [2024-11-29 03:10:18.277610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.281522] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:02.356 [2024-11-29 03:10:18.281575] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.356 [2024-11-29 03:10:18.281593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.281601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.356 [2024-11-29 03:10:18.281610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.884 ms 00:23:02.356 [2024-11-29 03:10:18.281618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.299688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.299889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.356 [2024-11-29 03:10:18.299911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.015 ms 00:23:02.356 [2024-11-29 03:10:18.299920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.302760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.302811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.356 [2024-11-29 03:10:18.302822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:23:02.356 [2024-11-29 03:10:18.302844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.305472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.305519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.356 [2024-11-29 03:10:18.305531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.577 ms 00:23:02.356 [2024-11-29 03:10:18.305539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.305924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.305945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.356 [2024-11-29 03:10:18.305956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:23:02.356 [2024-11-29 03:10:18.305968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.329173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.329235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.356 [2024-11-29 03:10:18.329249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.183 ms 00:23:02.356 [2024-11-29 03:10:18.329267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.337317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.356 [2024-11-29 03:10:18.340391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.340439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.356 [2024-11-29 03:10:18.340452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.070 ms 00:23:02.356 [2024-11-29 03:10:18.340461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.340539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.340550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.356 [2024-11-29 03:10:18.340565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:02.356 [2024-11-29 03:10:18.340580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.342364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.342498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.356 [2024-11-29 03:10:18.342551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:23:02.356 [2024-11-29 03:10:18.342583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.342624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.342653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.356 [2024-11-29 03:10:18.342674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.356 [2024-11-29 03:10:18.342698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.356 [2024-11-29 03:10:18.342748] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.356 [2024-11-29 03:10:18.342776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.356 [2024-11-29 03:10:18.342796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.618 [2024-11-29 03:10:18.342976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:02.618 [2024-11-29 03:10:18.342995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:02.618 [2024-11-29 03:10:18.348421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.618 [2024-11-29 03:10:18.348587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.618 [2024-11-29 03:10:18.348606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.389 ms 00:23:02.618 [2024-11-29 03:10:18.348614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.618 [2024-11-29 03:10:18.348696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.618 [2024-11-29 03:10:18.348706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.618 [2024-11-29 03:10:18.348716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:02.618 [2024-11-29 03:10:18.348726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.618 [2024-11-29 03:10:18.349972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.860 ms, result 0 00:23:03.563  [2024-11-29T03:10:20.943Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-29T03:10:21.886Z] Copying: 29/1024 [MB] (16 MBps) [2024-11-29T03:10:22.830Z] Copying: 47/1024 [MB] (17 MBps) [2024-11-29T03:10:23.775Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-29T03:10:24.739Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-29T03:10:25.754Z] Copying: 88/1024 [MB] (19 MBps) [2024-11-29T03:10:26.698Z] Copying: 106/1024 [MB] (18 MBps) [2024-11-29T03:10:27.641Z] Copying: 119/1024 [MB] (13 MBps) [2024-11-29T03:10:28.585Z] Copying: 133/1024 [MB] (13 MBps) [2024-11-29T03:10:29.974Z] Copying: 152/1024 [MB] (18 MBps) [2024-11-29T03:10:30.548Z] Copying: 168/1024 [MB] (16 MBps) [2024-11-29T03:10:31.940Z] Copying: 179/1024 [MB] (11 MBps) [2024-11-29T03:10:32.887Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-29T03:10:33.831Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-29T03:10:34.772Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-29T03:10:35.711Z] Copying: 236/1024 [MB] (24 MBps) [2024-11-29T03:10:36.653Z] Copying: 252/1024 [MB] (16 MBps) [2024-11-29T03:10:37.599Z] Copying: 268/1024 [MB] (15 MBps) [2024-11-29T03:10:38.544Z] Copying: 281/1024 [MB] (13 MBps) [2024-11-29T03:10:39.933Z] Copying: 302/1024 [MB] (21 MBps) [2024-11-29T03:10:40.878Z] Copying: 324/1024 [MB] (21 MBps) [2024-11-29T03:10:41.820Z] Copying: 347/1024 [MB] (23 MBps) [2024-11-29T03:10:42.765Z] Copying: 366/1024 [MB] (18 MBps) [2024-11-29T03:10:43.708Z] Copying: 382/1024 [MB] (16 MBps) [2024-11-29T03:10:44.650Z] Copying: 401/1024 [MB] (18 MBps) [2024-11-29T03:10:45.594Z] Copying: 422/1024 [MB] (21 MBps) [2024-11-29T03:10:46.538Z] Copying: 440/1024 [MB] (17 MBps) [2024-11-29T03:10:47.929Z] Copying: 454/1024 [MB] (14 MBps) [2024-11-29T03:10:48.875Z] Copying: 471/1024 [MB] (16 MBps) [2024-11-29T03:10:49.820Z] Copying: 481/1024 [MB] (10 MBps) [2024-11-29T03:10:50.766Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-29T03:10:51.713Z] Copying: 503/1024 [MB] (10 MBps) [2024-11-29T03:10:52.658Z] Copying: 520/1024 [MB] (16 MBps) [2024-11-29T03:10:53.604Z] Copying: 533/1024 [MB] (12 MBps) [2024-11-29T03:10:54.550Z] Copying: 553/1024 [MB] (20 MBps) [2024-11-29T03:10:55.939Z] Copying: 573/1024 [MB] (19 MBps) [2024-11-29T03:10:56.884Z] Copying: 592/1024 [MB] (19 MBps) [2024-11-29T03:10:57.916Z] Copying: 605/1024 [MB] (13 MBps) [2024-11-29T03:10:58.863Z] Copying: 619/1024 [MB] (14 MBps) [2024-11-29T03:10:59.810Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-29T03:11:00.755Z] Copying: 
645/1024 [MB] (13 MBps) [2024-11-29T03:11:01.702Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-29T03:11:02.647Z] Copying: 674/1024 [MB] (17 MBps) [2024-11-29T03:11:03.594Z] Copying: 690/1024 [MB] (16 MBps) [2024-11-29T03:11:04.539Z] Copying: 704/1024 [MB] (13 MBps) [2024-11-29T03:11:05.924Z] Copying: 724/1024 [MB] (19 MBps) [2024-11-29T03:11:06.869Z] Copying: 748/1024 [MB] (24 MBps) [2024-11-29T03:11:07.814Z] Copying: 761/1024 [MB] (13 MBps) [2024-11-29T03:11:08.757Z] Copying: 784/1024 [MB] (22 MBps) [2024-11-29T03:11:09.698Z] Copying: 802/1024 [MB] (18 MBps) [2024-11-29T03:11:10.643Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-29T03:11:11.588Z] Copying: 823/1024 [MB] (10 MBps) [2024-11-29T03:11:12.975Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-29T03:11:13.549Z] Copying: 844/1024 [MB] (10 MBps) [2024-11-29T03:11:14.937Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-29T03:11:15.879Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-29T03:11:16.821Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-29T03:11:17.765Z] Copying: 887/1024 [MB] (10 MBps) [2024-11-29T03:11:18.711Z] Copying: 904/1024 [MB] (17 MBps) [2024-11-29T03:11:19.657Z] Copying: 920/1024 [MB] (15 MBps) [2024-11-29T03:11:20.601Z] Copying: 935/1024 [MB] (14 MBps) [2024-11-29T03:11:21.545Z] Copying: 952/1024 [MB] (17 MBps) [2024-11-29T03:11:22.931Z] Copying: 968/1024 [MB] (16 MBps) [2024-11-29T03:11:23.874Z] Copying: 991/1024 [MB] (23 MBps) [2024-11-29T03:11:24.821Z] Copying: 1012/1024 [MB] (20 MBps) [2024-11-29T03:11:24.821Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 03:11:24.777169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.777288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:08.829 [2024-11-29 03:11:24.777314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.829 [2024-11-29 03:11:24.777331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.777373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.829 [2024-11-29 03:11:24.778366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.778406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:08.829 [2024-11-29 03:11:24.778427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:24:08.829 [2024-11-29 03:11:24.778437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.778684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.778778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:08.829 [2024-11-29 03:11:24.778791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:24:08.829 [2024-11-29 03:11:24.778806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.785540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.785750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:08.829 [2024-11-29 03:11:24.785772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.714 ms 00:24:08.829 [2024-11-29 03:11:24.785782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.792231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.792281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:08.829 [2024-11-29 03:11:24.792295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.313 ms 00:24:08.829 [2024-11-29 03:11:24.792303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.795093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.795147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:08.829 [2024-11-29 03:11:24.795158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:24:08.829 [2024-11-29 03:11:24.795165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.829 [2024-11-29 03:11:24.799685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.829 [2024-11-29 03:11:24.799742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:08.829 [2024-11-29 03:11:24.799765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:24:08.829 [2024-11-29 03:11:24.799777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 [2024-11-29 03:11:24.967780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.092 [2024-11-29 03:11:24.967865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:09.092 [2024-11-29 03:11:24.967885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.950 ms 00:24:09.092 [2024-11-29 03:11:24.967894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 [2024-11-29 03:11:24.970608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.092 [2024-11-29 03:11:24.970660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:09.092 [2024-11-29 03:11:24.970671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:24:09.092 [2024-11-29 03:11:24.970679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 [2024-11-29 03:11:24.973072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.092 [2024-11-29 03:11:24.973271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:09.092 [2024-11-29 03:11:24.973291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:24:09.092 [2024-11-29 03:11:24.973299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 [2024-11-29 03:11:24.975037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.092 [2024-11-29 03:11:24.975088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:09.092 [2024-11-29 03:11:24.975098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:24:09.092 [2024-11-29 03:11:24.975105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 [2024-11-29 03:11:24.976780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.092 [2024-11-29 03:11:24.976969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:09.092 [2024-11-29 03:11:24.976989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:24:09.092 [2024-11-29 03:11:24.976997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.092 
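Each trace_step group above reports a stage name followed by its duration, so the slow stages of the 'FTL startup' and 'FTL shutdown' pipelines can be ranked offline. A rough sketch (not part of the harness; it assumes the notices have been normalized to one per line in a file named ftl.log):

    awk '/name:/     { sub(/.*name: /, "");     name = $0 }
         /duration:/ { sub(/.*duration: /, ""); print $1, name }' ftl.log |
      sort -rn | head

Against this run it would surface 'Persist P2L metadata' (167.950 ms) as the dominant single step of the shutdown path.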
[2024-11-29 03:11:24.977033] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:09.092 [2024-11-29 03:11:24.977060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:09.092 [2024-11-29 03:11:24.977072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:09.092 [2024-11-29 03:11:24.977197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 
[2024-11-29 03:11:24.977249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:24:09.093 [2024-11-29 03:11:24.977448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:09.093 [2024-11-29 03:11:24.977891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:09.093 [2024-11-29 03:11:24.977900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4399c0a4-156c-4cb4-889f-ab275081974c 00:24:09.093 [2024-11-29 03:11:24.977914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:09.093 [2024-11-29 03:11:24.977925] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29888 00:24:09.093 [2024-11-29 03:11:24.977941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28928 00:24:09.093 [2024-11-29 03:11:24.977950] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0332 00:24:09.093 [2024-11-29 03:11:24.977958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:09.093 [2024-11-29 03:11:24.977970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:09.093 [2024-11-29 03:11:24.977978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:09.094 [2024-11-29 03:11:24.977985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:09.094 [2024-11-29 03:11:24.977992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:09.094 [2024-11-29 03:11:24.978000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.094 [2024-11-29 03:11:24.978008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:09.094 [2024-11-29 03:11:24.978017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:24:09.094 [2024-11-29 03:11:24.978025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.980360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.094 [2024-11-29 03:11:24.980524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:09.094 [2024-11-29 03:11:24.980542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:24:09.094 [2024-11-29 03:11:24.980551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.980672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.094 [2024-11-29 03:11:24.980681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:09.094 [2024-11-29 03:11:24.980690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:09.094 [2024-11-29 03:11:24.980700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.988723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:24.988777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.094 [2024-11-29 03:11:24.988790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:24.988798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.988903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:24.988914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.094 [2024-11-29 03:11:24.988931] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:24.988939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.989008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:24.989020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.094 [2024-11-29 03:11:24.989029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:24.989038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:24.989054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:24.989067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.094 [2024-11-29 03:11:24.989075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:24.989087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.003087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.003139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.094 [2024-11-29 03:11:25.003151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.003160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.013663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.013710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.094 [2024-11-29 03:11:25.013722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.013730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.013786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.013807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.094 [2024-11-29 03:11:25.013815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.014127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.094 [2024-11-29 03:11:25.014148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.014331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.094 [2024-11-29 03:11:25.014349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.014414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:24:09.094 [2024-11-29 03:11:25.014427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.014488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.094 [2024-11-29 03:11:25.014496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.094 [2024-11-29 03:11:25.014557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.094 [2024-11-29 03:11:25.014566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.094 [2024-11-29 03:11:25.014574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.094 [2024-11-29 03:11:25.014717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 237.523 ms, result 0 00:24:09.354 00:24:09.354 00:24:09.354 03:11:25 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:11.899 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.899 Process with pid 88121 is not found 00:24:11.899 Remove shared memory files 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88121 00:24:11.899 03:11:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88121 ']' 00:24:11.899 03:11:27 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88121 00:24:11.899 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88121) - No such process 00:24:11.899 03:11:27 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88121 is not found' 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:11.899 03:11:27 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:11.899 ************************************ 00:24:11.899 END TEST ftl_restore 00:24:11.899 ************************************ 00:24:11.899 00:24:11.899 real 4m37.547s 00:24:11.899 user 4m24.637s 00:24:11.899 sys 0m12.636s 00:24:11.899 03:11:27 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:11.899 03:11:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 03:11:27 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:11.899 03:11:27 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:11.899 03:11:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.899 03:11:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:11.899 ************************************ 00:24:11.899 START TEST ftl_dirty_shutdown 00:24:11.899 ************************************ 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:11.899 * Looking for test storage... 00:24:11.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:11.899 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.900 --rc genhtml_branch_coverage=1 00:24:11.900 --rc genhtml_function_coverage=1 00:24:11.900 --rc genhtml_legend=1 00:24:11.900 --rc geninfo_all_blocks=1 00:24:11.900 --rc geninfo_unexecuted_blocks=1 00:24:11.900 00:24:11.900 ' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.900 --rc genhtml_branch_coverage=1 00:24:11.900 --rc genhtml_function_coverage=1 00:24:11.900 --rc genhtml_legend=1 00:24:11.900 --rc geninfo_all_blocks=1 00:24:11.900 --rc geninfo_unexecuted_blocks=1 00:24:11.900 00:24:11.900 ' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.900 --rc genhtml_branch_coverage=1 00:24:11.900 --rc genhtml_function_coverage=1 00:24:11.900 --rc genhtml_legend=1 00:24:11.900 --rc geninfo_all_blocks=1 00:24:11.900 --rc geninfo_unexecuted_blocks=1 00:24:11.900 00:24:11.900 ' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.900 --rc genhtml_branch_coverage=1 00:24:11.900 --rc genhtml_function_coverage=1 00:24:11.900 --rc genhtml_legend=1 00:24:11.900 --rc geninfo_all_blocks=1 00:24:11.900 --rc geninfo_unexecuted_blocks=1 00:24:11.900 00:24:11.900 ' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:11.900 03:11:27 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91051 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91051 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91051 ']' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.900 03:11:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:12.161 [2024-11-29 03:11:27.926126] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
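The waitforlisten call traced above blocks until the freshly started spdk_tgt (pid 91051) answers on its RPC socket before the script issues any bdev RPCs. A simplified stand-in for that helper (sketch only; the real implementation lives in test/common/autotest_common.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # same -m 0x1 core mask as in the trace above
    svcpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done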
00:24:12.161 [2024-11-29 03:11:27.926487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91051 ] 00:24:12.161 [2024-11-29 03:11:28.071080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.161 [2024-11-29 03:11:28.099697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:13.109 03:11:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:13.109 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:13.436 { 00:24:13.436 "name": "nvme0n1", 00:24:13.436 "aliases": [ 00:24:13.436 "697f33f3-dee3-4651-923c-abc8750202ab" 00:24:13.436 ], 00:24:13.436 "product_name": "NVMe disk", 00:24:13.436 "block_size": 4096, 00:24:13.436 "num_blocks": 1310720, 00:24:13.436 "uuid": "697f33f3-dee3-4651-923c-abc8750202ab", 00:24:13.436 "numa_id": -1, 00:24:13.436 "assigned_rate_limits": { 00:24:13.436 "rw_ios_per_sec": 0, 00:24:13.436 "rw_mbytes_per_sec": 0, 00:24:13.436 "r_mbytes_per_sec": 0, 00:24:13.436 "w_mbytes_per_sec": 0 00:24:13.436 }, 00:24:13.436 "claimed": true, 00:24:13.436 "claim_type": "read_many_write_one", 00:24:13.436 "zoned": false, 00:24:13.436 "supported_io_types": { 00:24:13.436 "read": true, 00:24:13.436 "write": true, 00:24:13.436 "unmap": true, 00:24:13.436 "flush": true, 00:24:13.436 "reset": true, 00:24:13.436 "nvme_admin": true, 00:24:13.436 "nvme_io": true, 00:24:13.436 "nvme_io_md": false, 00:24:13.436 "write_zeroes": true, 00:24:13.436 "zcopy": false, 00:24:13.436 "get_zone_info": false, 00:24:13.436 "zone_management": false, 00:24:13.436 "zone_append": false, 00:24:13.436 "compare": true, 00:24:13.436 "compare_and_write": false, 00:24:13.436 "abort": true, 00:24:13.436 "seek_hole": false, 00:24:13.436 "seek_data": false, 00:24:13.436 
"copy": true, 00:24:13.436 "nvme_iov_md": false 00:24:13.436 }, 00:24:13.436 "driver_specific": { 00:24:13.436 "nvme": [ 00:24:13.436 { 00:24:13.436 "pci_address": "0000:00:11.0", 00:24:13.436 "trid": { 00:24:13.436 "trtype": "PCIe", 00:24:13.436 "traddr": "0000:00:11.0" 00:24:13.436 }, 00:24:13.436 "ctrlr_data": { 00:24:13.436 "cntlid": 0, 00:24:13.436 "vendor_id": "0x1b36", 00:24:13.436 "model_number": "QEMU NVMe Ctrl", 00:24:13.436 "serial_number": "12341", 00:24:13.436 "firmware_revision": "8.0.0", 00:24:13.436 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:13.436 "oacs": { 00:24:13.436 "security": 0, 00:24:13.436 "format": 1, 00:24:13.436 "firmware": 0, 00:24:13.436 "ns_manage": 1 00:24:13.436 }, 00:24:13.436 "multi_ctrlr": false, 00:24:13.436 "ana_reporting": false 00:24:13.436 }, 00:24:13.436 "vs": { 00:24:13.436 "nvme_version": "1.4" 00:24:13.436 }, 00:24:13.436 "ns_data": { 00:24:13.436 "id": 1, 00:24:13.436 "can_share": false 00:24:13.436 } 00:24:13.436 } 00:24:13.436 ], 00:24:13.436 "mp_policy": "active_passive" 00:24:13.436 } 00:24:13.436 } 00:24:13.436 ]' 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:13.436 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:13.702 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=af25ed71-5bd7-40ba-ba69-5338cfdb4042 00:24:13.702 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:13.702 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af25ed71-5bd7-40ba-ba69-5338cfdb4042 00:24:13.961 03:11:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:14.219 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=379a72df-c722-4951-9168-45a9349d5d5c 00:24:14.219 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 379a72df-c722-4951-9168-45a9349d5d5c 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.477 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:14.477 { 00:24:14.477 "name": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:14.477 "aliases": [ 00:24:14.477 "lvs/nvme0n1p0" 00:24:14.477 ], 00:24:14.477 "product_name": "Logical Volume", 00:24:14.477 "block_size": 4096, 00:24:14.477 "num_blocks": 26476544, 00:24:14.477 "uuid": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:14.477 "assigned_rate_limits": { 00:24:14.477 "rw_ios_per_sec": 0, 00:24:14.477 "rw_mbytes_per_sec": 0, 00:24:14.477 "r_mbytes_per_sec": 0, 00:24:14.477 "w_mbytes_per_sec": 0 00:24:14.477 }, 00:24:14.477 "claimed": false, 00:24:14.477 "zoned": false, 00:24:14.477 "supported_io_types": { 00:24:14.477 "read": true, 00:24:14.477 "write": true, 00:24:14.477 "unmap": true, 00:24:14.477 "flush": false, 00:24:14.477 "reset": true, 00:24:14.477 "nvme_admin": false, 00:24:14.477 "nvme_io": false, 00:24:14.477 "nvme_io_md": false, 00:24:14.477 "write_zeroes": true, 00:24:14.477 "zcopy": false, 00:24:14.477 "get_zone_info": false, 00:24:14.477 "zone_management": false, 00:24:14.477 "zone_append": false, 00:24:14.477 "compare": false, 00:24:14.477 "compare_and_write": false, 00:24:14.477 "abort": false, 00:24:14.477 "seek_hole": true, 00:24:14.477 "seek_data": true, 00:24:14.477 "copy": false, 00:24:14.477 "nvme_iov_md": false 00:24:14.477 }, 00:24:14.477 "driver_specific": { 00:24:14.477 "lvol": { 00:24:14.477 "lvol_store_uuid": "379a72df-c722-4951-9168-45a9349d5d5c", 00:24:14.477 "base_bdev": "nvme0n1", 00:24:14.477 "thin_provision": true, 00:24:14.477 "num_allocated_clusters": 0, 00:24:14.477 "snapshot": false, 00:24:14.477 "clone": false, 00:24:14.478 "esnap_clone": false 00:24:14.478 } 00:24:14.478 } 00:24:14.478 } 00:24:14.478 ]' 00:24:14.478 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:14.736 03:11:30 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:14.995 { 00:24:14.995 "name": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:14.995 "aliases": [ 00:24:14.995 "lvs/nvme0n1p0" 00:24:14.995 ], 00:24:14.995 "product_name": "Logical Volume", 00:24:14.995 "block_size": 4096, 00:24:14.995 "num_blocks": 26476544, 00:24:14.995 "uuid": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:14.995 "assigned_rate_limits": { 00:24:14.995 "rw_ios_per_sec": 0, 00:24:14.995 "rw_mbytes_per_sec": 0, 00:24:14.995 "r_mbytes_per_sec": 0, 00:24:14.995 "w_mbytes_per_sec": 0 00:24:14.995 }, 00:24:14.995 "claimed": false, 00:24:14.995 "zoned": false, 00:24:14.995 "supported_io_types": { 00:24:14.995 "read": true, 00:24:14.995 "write": true, 00:24:14.995 "unmap": true, 00:24:14.995 "flush": false, 00:24:14.995 "reset": true, 00:24:14.995 "nvme_admin": false, 00:24:14.995 "nvme_io": false, 00:24:14.995 "nvme_io_md": false, 00:24:14.995 "write_zeroes": true, 00:24:14.995 "zcopy": false, 00:24:14.995 "get_zone_info": false, 00:24:14.995 "zone_management": false, 00:24:14.995 "zone_append": false, 00:24:14.995 "compare": false, 00:24:14.995 "compare_and_write": false, 00:24:14.995 "abort": false, 00:24:14.995 "seek_hole": true, 00:24:14.995 "seek_data": true, 00:24:14.995 "copy": false, 00:24:14.995 "nvme_iov_md": false 00:24:14.995 }, 00:24:14.995 "driver_specific": { 00:24:14.995 "lvol": { 00:24:14.995 "lvol_store_uuid": "379a72df-c722-4951-9168-45a9349d5d5c", 00:24:14.995 "base_bdev": "nvme0n1", 00:24:14.995 "thin_provision": true, 00:24:14.995 "num_allocated_clusters": 0, 00:24:14.995 "snapshot": false, 00:24:14.995 "clone": false, 00:24:14.995 "esnap_clone": false 00:24:14.995 } 00:24:14.995 } 00:24:14.995 } 00:24:14.995 ]' 00:24:14.995 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:15.252 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:15.252 03:11:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:15.252 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 00:24:15.510 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:15.510 { 00:24:15.510 "name": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:15.510 "aliases": [ 00:24:15.510 "lvs/nvme0n1p0" 00:24:15.510 ], 00:24:15.510 "product_name": "Logical Volume", 00:24:15.510 "block_size": 4096, 00:24:15.510 "num_blocks": 26476544, 00:24:15.510 "uuid": "eec7eed6-7f8f-42c0-9a01-3b57255bdf0f", 00:24:15.510 "assigned_rate_limits": { 00:24:15.510 "rw_ios_per_sec": 0, 00:24:15.510 "rw_mbytes_per_sec": 0, 00:24:15.510 "r_mbytes_per_sec": 0, 00:24:15.510 "w_mbytes_per_sec": 0 00:24:15.510 }, 00:24:15.510 "claimed": false, 00:24:15.510 "zoned": false, 00:24:15.510 "supported_io_types": { 00:24:15.510 "read": true, 00:24:15.510 "write": true, 00:24:15.510 "unmap": true, 00:24:15.510 "flush": false, 00:24:15.510 "reset": true, 00:24:15.510 "nvme_admin": false, 00:24:15.510 "nvme_io": false, 00:24:15.510 "nvme_io_md": false, 00:24:15.510 "write_zeroes": true, 00:24:15.510 "zcopy": false, 00:24:15.510 "get_zone_info": false, 00:24:15.510 "zone_management": false, 00:24:15.510 "zone_append": false, 00:24:15.510 "compare": false, 00:24:15.510 "compare_and_write": false, 00:24:15.510 "abort": false, 00:24:15.510 "seek_hole": true, 00:24:15.510 "seek_data": true, 00:24:15.510 "copy": false, 00:24:15.510 "nvme_iov_md": false 00:24:15.510 }, 00:24:15.510 "driver_specific": { 00:24:15.510 "lvol": { 00:24:15.510 "lvol_store_uuid": "379a72df-c722-4951-9168-45a9349d5d5c", 00:24:15.510 "base_bdev": "nvme0n1", 00:24:15.510 "thin_provision": true, 00:24:15.510 "num_allocated_clusters": 0, 00:24:15.510 "snapshot": false, 00:24:15.510 "clone": false, 00:24:15.510 "esnap_clone": false 00:24:15.510 } 00:24:15.510 } 00:24:15.510 } 00:24:15.510 ]' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d eec7eed6-7f8f-42c0-9a01-3b57255bdf0f 
--l2p_dram_limit 10' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:15.511 03:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eec7eed6-7f8f-42c0-9a01-3b57255bdf0f --l2p_dram_limit 10 -c nvc0n1p0 00:24:15.771 [2024-11-29 03:11:31.692822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.771 [2024-11-29 03:11:31.692872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:15.772 [2024-11-29 03:11:31.692883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:15.772 [2024-11-29 03:11:31.692891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.692934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.692944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.772 [2024-11-29 03:11:31.692951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:15.772 [2024-11-29 03:11:31.692960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.692974] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:15.772 [2024-11-29 03:11:31.693180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:15.772 [2024-11-29 03:11:31.693200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.693207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.772 [2024-11-29 03:11:31.693214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:15.772 [2024-11-29 03:11:31.693221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.693244] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4aeceb6-49df-40de-9f15-247c3b8e06b1 00:24:15.772 [2024-11-29 03:11:31.694176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.694203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:15.772 [2024-11-29 03:11:31.694213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:15.772 [2024-11-29 03:11:31.694219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.698803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.698839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.772 [2024-11-29 03:11:31.698848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.528 ms 00:24:15.772 [2024-11-29 03:11:31.698854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.698925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.698932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.772 [2024-11-29 03:11:31.698940] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:15.772 [2024-11-29 03:11:31.698945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.698981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.698989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:15.772 [2024-11-29 03:11:31.698997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:15.772 [2024-11-29 03:11:31.699003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.699023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:15.772 [2024-11-29 03:11:31.700256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.700281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.772 [2024-11-29 03:11:31.700292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:24:15.772 [2024-11-29 03:11:31.700300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.700324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.700331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:15.772 [2024-11-29 03:11:31.700339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:15.772 [2024-11-29 03:11:31.700348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.700365] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:15.772 [2024-11-29 03:11:31.700478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:15.772 [2024-11-29 03:11:31.700491] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:15.772 [2024-11-29 03:11:31.700500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:15.772 [2024-11-29 03:11:31.700508] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700518] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700524] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:15.772 [2024-11-29 03:11:31.700533] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:15.772 [2024-11-29 03:11:31.700539] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:15.772 [2024-11-29 03:11:31.700546] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:15.772 [2024-11-29 03:11:31.700552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.700559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:15.772 [2024-11-29 03:11:31.700568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:24:15.772 [2024-11-29 03:11:31.700574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.700638] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.772 [2024-11-29 03:11:31.700647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:15.772 [2024-11-29 03:11:31.700653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:15.772 [2024-11-29 03:11:31.700661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.772 [2024-11-29 03:11:31.700732] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:15.772 [2024-11-29 03:11:31.700741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:15.772 [2024-11-29 03:11:31.700747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:15.772 [2024-11-29 03:11:31.700767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:15.772 [2024-11-29 03:11:31.700784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.772 [2024-11-29 03:11:31.700795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:15.772 [2024-11-29 03:11:31.700802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:15.772 [2024-11-29 03:11:31.700807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.772 [2024-11-29 03:11:31.700815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:15.772 [2024-11-29 03:11:31.700820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:15.772 [2024-11-29 03:11:31.700835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:15.772 [2024-11-29 03:11:31.700848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:15.772 [2024-11-29 03:11:31.700864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:15.772 [2024-11-29 03:11:31.700882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:15.772 [2024-11-29 03:11:31.700904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700917] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:15.772 [2024-11-29 03:11:31.700927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.772 [2024-11-29 03:11:31.700939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:15.772 [2024-11-29 03:11:31.700945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:15.772 [2024-11-29 03:11:31.700952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.772 [2024-11-29 03:11:31.700957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:15.772 [2024-11-29 03:11:31.700964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:15.772 [2024-11-29 03:11:31.700970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.772 [2024-11-29 03:11:31.700977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:15.772 [2024-11-29 03:11:31.700983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:15.772 [2024-11-29 03:11:31.700990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.773 [2024-11-29 03:11:31.700995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:15.773 [2024-11-29 03:11:31.701003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:15.773 [2024-11-29 03:11:31.701008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.773 [2024-11-29 03:11:31.701015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:15.773 [2024-11-29 03:11:31.701025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:15.773 [2024-11-29 03:11:31.701034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.773 [2024-11-29 03:11:31.701041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.773 [2024-11-29 03:11:31.701050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:15.773 [2024-11-29 03:11:31.701057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:15.773 [2024-11-29 03:11:31.701064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:15.773 [2024-11-29 03:11:31.701070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:15.773 [2024-11-29 03:11:31.701077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:15.773 [2024-11-29 03:11:31.701082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:15.773 [2024-11-29 03:11:31.701093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:15.773 [2024-11-29 03:11:31.701101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:15.773 [2024-11-29 03:11:31.701116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:15.773 [2024-11-29 03:11:31.701125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:15.773 [2024-11-29 03:11:31.701131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:15.773 [2024-11-29 03:11:31.701139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:15.773 [2024-11-29 03:11:31.701145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:15.773 [2024-11-29 03:11:31.701154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:15.773 [2024-11-29 03:11:31.701160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:15.773 [2024-11-29 03:11:31.701167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:15.773 [2024-11-29 03:11:31.701174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:15.773 [2024-11-29 03:11:31.701209] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:15.773 [2024-11-29 03:11:31.701216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:15.773 [2024-11-29 03:11:31.701231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:15.773 [2024-11-29 03:11:31.701239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:15.773 [2024-11-29 03:11:31.701245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:15.773 [2024-11-29 03:11:31.701252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.773 [2024-11-29 03:11:31.701261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:15.773 [2024-11-29 03:11:31.701271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:15.773 [2024-11-29 03:11:31.701278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.773 [2024-11-29 03:11:31.701309] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:15.773 [2024-11-29 03:11:31.701316] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:19.069 [2024-11-29 03:11:34.529924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.529967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:19.069 [2024-11-29 03:11:34.529980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2828.601 ms 00:24:19.069 [2024-11-29 03:11:34.529987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.537406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.537439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:19.069 [2024-11-29 03:11:34.537450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.351 ms 00:24:19.069 [2024-11-29 03:11:34.537457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.537525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.537532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:19.069 [2024-11-29 03:11:34.537540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:19.069 [2024-11-29 03:11:34.537545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.544802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.544847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:19.069 [2024-11-29 03:11:34.544857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.219 ms 00:24:19.069 [2024-11-29 03:11:34.544868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.544892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.544898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:19.069 [2024-11-29 03:11:34.544906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:19.069 [2024-11-29 03:11:34.544914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.545192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.545205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:19.069 [2024-11-29 03:11:34.545213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:24:19.069 [2024-11-29 03:11:34.545222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.545313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.545319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:19.069 [2024-11-29 03:11:34.545328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:19.069 [2024-11-29 03:11:34.545333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.550028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.550053] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:19.069 [2024-11-29 03:11:34.550061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:24:19.069 [2024-11-29 03:11:34.550067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.565231] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:19.069 [2024-11-29 03:11:34.567611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.567795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:19.069 [2024-11-29 03:11:34.567816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.488 ms 00:24:19.069 [2024-11-29 03:11:34.567854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.626056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.626112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:19.069 [2024-11-29 03:11:34.626125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.148 ms 00:24:19.069 [2024-11-29 03:11:34.626138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.626323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.626335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:19.069 [2024-11-29 03:11:34.626344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:24:19.069 [2024-11-29 03:11:34.626353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.630791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.630851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:19.069 [2024-11-29 03:11:34.630867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.419 ms 00:24:19.069 [2024-11-29 03:11:34.630877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.634104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.634139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:19.069 [2024-11-29 03:11:34.634149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:24:19.069 [2024-11-29 03:11:34.634158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.634448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.634460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:19.069 [2024-11-29 03:11:34.634473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:19.069 [2024-11-29 03:11:34.634484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.665907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.665947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:19.069 [2024-11-29 03:11:34.665961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.405 ms 00:24:19.069 [2024-11-29 03:11:34.665971] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.670895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.670931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:19.069 [2024-11-29 03:11:34.670940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:24:19.069 [2024-11-29 03:11:34.670950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.674934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.674968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:19.069 [2024-11-29 03:11:34.674977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:24:19.069 [2024-11-29 03:11:34.674986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.679783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.679822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:19.069 [2024-11-29 03:11:34.679848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.765 ms 00:24:19.069 [2024-11-29 03:11:34.679859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.679896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.679907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:19.069 [2024-11-29 03:11:34.679930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:19.069 [2024-11-29 03:11:34.679939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.069 [2024-11-29 03:11:34.680002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.069 [2024-11-29 03:11:34.680013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:19.070 [2024-11-29 03:11:34.680021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:19.070 [2024-11-29 03:11:34.680033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.070 [2024-11-29 03:11:34.680925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2987.661 ms, result 0 00:24:19.070 { 00:24:19.070 "name": "ftl0", 00:24:19.070 "uuid": "a4aeceb6-49df-40de-9f15-247c3b8e06b1" 00:24:19.070 } 00:24:19.070 03:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:19.070 03:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:19.070 03:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:19.070 03:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:19.070 03:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:19.330 /dev/nbd0 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:19.330 1+0 records in 00:24:19.330 1+0 records out 00:24:19.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406454 s, 10.1 MB/s 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:19.330 03:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:19.330 [2024-11-29 03:11:35.178977] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:24:19.330 [2024-11-29 03:11:35.179113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91182 ] 00:24:19.588 [2024-11-29 03:11:35.327657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.588 [2024-11-29 03:11:35.355933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.522  [2024-11-29T03:11:37.452Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-29T03:11:38.834Z] Copying: 389/1024 [MB] (194 MBps) [2024-11-29T03:11:39.773Z] Copying: 580/1024 [MB] (190 MBps) [2024-11-29T03:11:40.339Z] Copying: 796/1024 [MB] (216 MBps) [2024-11-29T03:11:40.599Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:24:24.607 00:24:24.607 03:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:27.148 03:11:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:27.148 [2024-11-29 03:11:42.675271] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
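Note: dirty_shutdown.sh@75 through @77 above generate a 1 GiB reference file (262144 x 4 KiB blocks of /dev/urandom), record its md5sum, and then stream it onto /dev/nbd0 (the FTL bdev exported over nbd) with direct I/O; the sync and nbd_stop_disk steps follow in the trace below. A compressed sketch of that integrity-check pattern, assuming plain dd in place of spdk_dd and a hypothetical readback file (the test's actual verification is performed after the shutdown/restart cycle later in the log, not at this point):

    # Sketch of the write-then-verify flow; TESTFILE/NBD mirror this log,
    # /tmp/readback is an assumed scratch path for illustration.
    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    NBD=/dev/nbd0
    md5_before=$(md5sum "$TESTFILE" | cut -d' ' -f1)
    # Push the reference data through FTL via the nbd export.
    dd if="$TESTFILE" of="$NBD" bs=4096 count=262144 oflag=direct
    sync "$NBD"
    # ... dirty shutdown / restart of the FTL bdev happens here ...
    # Read the data back and compare checksums; a mismatch fails the test.
    dd if="$NBD" of=/tmp/readback bs=4096 count=262144 iflag=direct
    md5_after=$(md5sum /tmp/readback | cut -d' ' -f1)
    [ "$md5_before" = "$md5_after" ] || { echo "data mismatch after dirty shutdown" >&2; exit 1; }

The per-interval "Copying: N/1024 [MB]" lines that follow are spdk_dd's flattened progress output (carriage-return updates joined with Jenkins timestamps), recording the effective write throughput through the FTL device.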
00:24:27.148 [2024-11-29 03:11:42.675380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91263 ] 00:24:27.148 [2024-11-29 03:11:42.820434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.148 [2024-11-29 03:11:42.839103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.089  [2024-11-29T03:11:45.022Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-29T03:11:45.954Z] Copying: 30/1024 [MB] (16 MBps) [2024-11-29T03:11:47.328Z] Copying: 64/1024 [MB] (33 MBps) [2024-11-29T03:11:47.893Z] Copying: 91/1024 [MB] (26 MBps) [2024-11-29T03:11:49.266Z] Copying: 127/1024 [MB] (36 MBps) [2024-11-29T03:11:50.201Z] Copying: 157/1024 [MB] (30 MBps) [2024-11-29T03:11:51.139Z] Copying: 193/1024 [MB] (35 MBps) [2024-11-29T03:11:52.084Z] Copying: 229/1024 [MB] (36 MBps) [2024-11-29T03:11:53.030Z] Copying: 242/1024 [MB] (12 MBps) [2024-11-29T03:11:53.973Z] Copying: 257880/1048576 [kB] (9996 kBps) [2024-11-29T03:11:54.907Z] Copying: 264/1024 [MB] (12 MBps) [2024-11-29T03:11:56.280Z] Copying: 297/1024 [MB] (32 MBps) [2024-11-29T03:11:57.224Z] Copying: 334/1024 [MB] (36 MBps) [2024-11-29T03:11:58.164Z] Copying: 349/1024 [MB] (15 MBps) [2024-11-29T03:11:59.105Z] Copying: 369/1024 [MB] (20 MBps) [2024-11-29T03:12:00.044Z] Copying: 386/1024 [MB] (16 MBps) [2024-11-29T03:12:00.987Z] Copying: 408/1024 [MB] (22 MBps) [2024-11-29T03:12:01.964Z] Copying: 430/1024 [MB] (21 MBps) [2024-11-29T03:12:02.928Z] Copying: 451/1024 [MB] (21 MBps) [2024-11-29T03:12:04.313Z] Copying: 469/1024 [MB] (17 MBps) [2024-11-29T03:12:05.268Z] Copying: 494/1024 [MB] (24 MBps) [2024-11-29T03:12:06.214Z] Copying: 510/1024 [MB] (16 MBps) [2024-11-29T03:12:07.155Z] Copying: 529/1024 [MB] (18 MBps) [2024-11-29T03:12:08.099Z] Copying: 550/1024 [MB] (21 MBps) [2024-11-29T03:12:09.039Z] Copying: 572/1024 [MB] (21 MBps) [2024-11-29T03:12:09.982Z] Copying: 591/1024 [MB] (19 MBps) [2024-11-29T03:12:10.927Z] Copying: 610/1024 [MB] (19 MBps) [2024-11-29T03:12:12.313Z] Copying: 627/1024 [MB] (16 MBps) [2024-11-29T03:12:13.255Z] Copying: 644/1024 [MB] (16 MBps) [2024-11-29T03:12:14.189Z] Copying: 661/1024 [MB] (17 MBps) [2024-11-29T03:12:15.123Z] Copying: 694/1024 [MB] (33 MBps) [2024-11-29T03:12:16.058Z] Copying: 730/1024 [MB] (35 MBps) [2024-11-29T03:12:16.999Z] Copying: 759/1024 [MB] (29 MBps) [2024-11-29T03:12:17.940Z] Copying: 774/1024 [MB] (14 MBps) [2024-11-29T03:12:19.317Z] Copying: 789/1024 [MB] (14 MBps) [2024-11-29T03:12:20.258Z] Copying: 812/1024 [MB] (23 MBps) [2024-11-29T03:12:21.201Z] Copying: 837/1024 [MB] (24 MBps) [2024-11-29T03:12:22.137Z] Copying: 850/1024 [MB] (12 MBps) [2024-11-29T03:12:23.082Z] Copying: 879/1024 [MB] (29 MBps) [2024-11-29T03:12:24.026Z] Copying: 892/1024 [MB] (13 MBps) [2024-11-29T03:12:24.965Z] Copying: 907/1024 [MB] (14 MBps) [2024-11-29T03:12:25.905Z] Copying: 933/1024 [MB] (26 MBps) [2024-11-29T03:12:27.291Z] Copying: 964/1024 [MB] (30 MBps) [2024-11-29T03:12:28.232Z] Copying: 979/1024 [MB] (15 MBps) [2024-11-29T03:12:28.823Z] Copying: 995/1024 [MB] (15 MBps) [2024-11-29T03:12:29.112Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:25:13.120 00:25:13.120 03:12:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:13.120 03:12:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk 
/dev/nbd0 00:25:13.120 03:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:13.382 [2024-11-29 03:12:29.286721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.286759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:13.382 [2024-11-29 03:12:29.286772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.382 [2024-11-29 03:12:29.286778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.286799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.382 [2024-11-29 03:12:29.287222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.287246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:13.382 [2024-11-29 03:12:29.287253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:25:13.382 [2024-11-29 03:12:29.287260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.288842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.288867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:13.382 [2024-11-29 03:12:29.288875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:25:13.382 [2024-11-29 03:12:29.288882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.302285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.302315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:13.382 [2024-11-29 03:12:29.302326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.389 ms 00:25:13.382 [2024-11-29 03:12:29.302334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.307149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.307266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:13.382 [2024-11-29 03:12:29.307279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:25:13.382 [2024-11-29 03:12:29.307287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.308318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.308352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:13.382 [2024-11-29 03:12:29.308359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:25:13.382 [2024-11-29 03:12:29.308368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.312117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.312149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:13.382 [2024-11-29 03:12:29.312157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:25:13.382 [2024-11-29 03:12:29.312164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.312261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.312271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:13.382 [2024-11-29 03:12:29.312278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:13.382 [2024-11-29 03:12:29.312290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.382 [2024-11-29 03:12:29.313726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.382 [2024-11-29 03:12:29.313755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:13.383 [2024-11-29 03:12:29.313762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:25:13.383 [2024-11-29 03:12:29.313769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.383 [2024-11-29 03:12:29.314804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.383 [2024-11-29 03:12:29.314846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:13.383 [2024-11-29 03:12:29.314854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:25:13.383 [2024-11-29 03:12:29.314861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.383 [2024-11-29 03:12:29.315889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.383 [2024-11-29 03:12:29.315917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:13.383 [2024-11-29 03:12:29.315924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:25:13.383 [2024-11-29 03:12:29.315931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.383 [2024-11-29 03:12:29.317038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.383 [2024-11-29 03:12:29.317137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:13.383 [2024-11-29 03:12:29.317148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:25:13.383 [2024-11-29 03:12:29.317155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.383 [2024-11-29 03:12:29.317178] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:13.383 [2024-11-29 03:12:29.317193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 
0 state: free 00:25:13.383 [2024-11-29 03:12:29.317255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
34: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317579] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:13.383 [2024-11-29 03:12:29.317702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317740] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:13.384 [2024-11-29 03:12:29.317884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:13.384 [2024-11-29 03:12:29.317901] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4aeceb6-49df-40de-9f15-247c3b8e06b1 00:25:13.384 [2024-11-29 03:12:29.317911] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:13.384 [2024-11-29 03:12:29.317917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:13.384 [2024-11-29 03:12:29.317924] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:13.384 [2024-11-29 03:12:29.317930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:13.384 [2024-11-29 03:12:29.317937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:13.384 [2024-11-29 03:12:29.317943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:13.384 [2024-11-29 03:12:29.317950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:13.384 [2024-11-29 03:12:29.317955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:13.384 [2024-11-29 
03:12:29.317964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:13.384 [2024-11-29 03:12:29.317970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.384 [2024-11-29 03:12:29.317977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:13.384 [2024-11-29 03:12:29.317985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:25:13.384 [2024-11-29 03:12:29.317992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.319226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.384 [2024-11-29 03:12:29.319248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:13.384 [2024-11-29 03:12:29.319255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:25:13.384 [2024-11-29 03:12:29.319263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.319326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.384 [2024-11-29 03:12:29.319336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:13.384 [2024-11-29 03:12:29.319342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:13.384 [2024-11-29 03:12:29.319349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.323934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.324021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.384 [2024-11-29 03:12:29.324064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.324084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.324137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.324227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.384 [2024-11-29 03:12:29.324248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.324268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.324356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.324388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.384 [2024-11-29 03:12:29.324404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.324420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.324475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.324496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.384 [2024-11-29 03:12:29.324513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.324560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.332404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.332514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.384 [2024-11-29 03:12:29.332559] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.332577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.339105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.339220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.384 [2024-11-29 03:12:29.339259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.339278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.339326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.339436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.384 [2024-11-29 03:12:29.339457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.339474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.339529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.339555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.384 [2024-11-29 03:12:29.339571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.339589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.339683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.339763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.384 [2024-11-29 03:12:29.339803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.339822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.339961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.339987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:13.384 [2024-11-29 03:12:29.340003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.340051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.340097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.340120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.384 [2024-11-29 03:12:29.340163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.340183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.340227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.384 [2024-11-29 03:12:29.340248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.384 [2024-11-29 03:12:29.340290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.384 [2024-11-29 03:12:29.340311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.384 [2024-11-29 03:12:29.340550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.799 ms, result 0 00:25:13.384 true 00:25:13.384 03:12:29 ftl.ftl_dirty_shutdown 
-- ftl/dirty_shutdown.sh@83 -- # kill -9 91051 00:25:13.384 03:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91051 00:25:13.384 03:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:13.643 [2024-11-29 03:12:29.417896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:25:13.643 [2024-11-29 03:12:29.418013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91748 ] 00:25:13.643 [2024-11-29 03:12:29.559332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.643 [2024-11-29 03:12:29.576186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.018  [2024-11-29T03:12:31.945Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-29T03:12:32.880Z] Copying: 520/1024 [MB] (260 MBps) [2024-11-29T03:12:33.815Z] Copying: 780/1024 [MB] (259 MBps) [2024-11-29T03:12:33.815Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:25:17.823 00:25:17.823 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91051 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:17.823 03:12:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:17.823 [2024-11-29 03:12:33.775195] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:25:17.823 [2024-11-29 03:12:33.775326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91795 ] 00:25:18.083 [2024-11-29 03:12:33.916679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.083 [2024-11-29 03:12:33.937477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.083 [2024-11-29 03:12:34.022159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:18.083 [2024-11-29 03:12:34.022212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:18.343 [2024-11-29 03:12:34.083809] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:18.343 [2024-11-29 03:12:34.084233] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:18.343 [2024-11-29 03:12:34.084450] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:18.343 [2024-11-29 03:12:34.242960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.243084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:18.343 [2024-11-29 03:12:34.243100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:18.343 [2024-11-29 03:12:34.243113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.243159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.243167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.343 [2024-11-29 03:12:34.243174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:18.343 [2024-11-29 03:12:34.243179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.243197] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:18.343 [2024-11-29 03:12:34.243377] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:18.343 [2024-11-29 03:12:34.243389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.243395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.343 [2024-11-29 03:12:34.243401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:18.343 [2024-11-29 03:12:34.243408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.244432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:18.343 [2024-11-29 03:12:34.246214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.246321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:18.343 [2024-11-29 03:12:34.246334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:25:18.343 [2024-11-29 03:12:34.246340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.246378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.246386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:18.343 [2024-11-29 03:12:34.246393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:18.343 [2024-11-29 03:12:34.246398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.250693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.250718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.343 [2024-11-29 03:12:34.250725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:25:18.343 [2024-11-29 03:12:34.250734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.250800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.250809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.343 [2024-11-29 03:12:34.250815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:18.343 [2024-11-29 03:12:34.250823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.250876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.250884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:18.343 [2024-11-29 03:12:34.250892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:18.343 [2024-11-29 03:12:34.250901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.250916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:18.343 [2024-11-29 03:12:34.252046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.252068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.343 [2024-11-29 03:12:34.252074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:25:18.343 [2024-11-29 03:12:34.252082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.252105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.252114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:18.343 [2024-11-29 03:12:34.252120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:18.343 [2024-11-29 03:12:34.252125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.252140] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:18.343 [2024-11-29 03:12:34.252154] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:18.343 [2024-11-29 03:12:34.252186] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:18.343 [2024-11-29 03:12:34.252203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:18.343 [2024-11-29 03:12:34.252281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:18.343 [2024-11-29 03:12:34.252292] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:18.343 
[2024-11-29 03:12:34.252300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:18.343 [2024-11-29 03:12:34.252307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252313] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252322] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:18.343 [2024-11-29 03:12:34.252328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:18.343 [2024-11-29 03:12:34.252333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:18.343 [2024-11-29 03:12:34.252340] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:18.343 [2024-11-29 03:12:34.252346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.252351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:18.343 [2024-11-29 03:12:34.252357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:18.343 [2024-11-29 03:12:34.252362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.252428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.343 [2024-11-29 03:12:34.252434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:18.343 [2024-11-29 03:12:34.252442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:18.343 [2024-11-29 03:12:34.252447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.343 [2024-11-29 03:12:34.252523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:18.343 [2024-11-29 03:12:34.252533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:18.343 [2024-11-29 03:12:34.252539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:18.343 [2024-11-29 03:12:34.252555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:18.343 [2024-11-29 03:12:34.252572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.343 [2024-11-29 03:12:34.252582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:18.343 [2024-11-29 03:12:34.252587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:18.343 [2024-11-29 03:12:34.252592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.343 [2024-11-29 03:12:34.252596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:18.343 [2024-11-29 03:12:34.252604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:18.343 [2024-11-29 03:12:34.252611] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:18.343 [2024-11-29 03:12:34.252621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:18.343 [2024-11-29 03:12:34.252636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:18.343 [2024-11-29 03:12:34.252650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:18.343 [2024-11-29 03:12:34.252665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:18.343 [2024-11-29 03:12:34.252679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.343 [2024-11-29 03:12:34.252694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:18.343 [2024-11-29 03:12:34.252700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:18.343 [2024-11-29 03:12:34.252705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.343 [2024-11-29 03:12:34.252711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:18.344 [2024-11-29 03:12:34.252716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:18.344 [2024-11-29 03:12:34.252722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.344 [2024-11-29 03:12:34.252727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:18.344 [2024-11-29 03:12:34.252733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:18.344 [2024-11-29 03:12:34.252738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.344 [2024-11-29 03:12:34.252744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:18.344 [2024-11-29 03:12:34.252749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:18.344 [2024-11-29 03:12:34.252755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.344 [2024-11-29 03:12:34.252761] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:18.344 [2024-11-29 03:12:34.252767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:18.344 [2024-11-29 03:12:34.252773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.344 [2024-11-29 03:12:34.252781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.344 [2024-11-29 
03:12:34.252788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:18.344 [2024-11-29 03:12:34.252794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:18.344 [2024-11-29 03:12:34.252800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:18.344 [2024-11-29 03:12:34.252806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:18.344 [2024-11-29 03:12:34.252812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:18.344 [2024-11-29 03:12:34.252817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:18.344 [2024-11-29 03:12:34.252824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:18.344 [2024-11-29 03:12:34.252841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:18.344 [2024-11-29 03:12:34.252858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:18.344 [2024-11-29 03:12:34.252864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:18.344 [2024-11-29 03:12:34.252870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:18.344 [2024-11-29 03:12:34.252877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:18.344 [2024-11-29 03:12:34.252887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:18.344 [2024-11-29 03:12:34.252893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:18.344 [2024-11-29 03:12:34.252900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:18.344 [2024-11-29 03:12:34.252906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:18.344 [2024-11-29 03:12:34.252912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:18.344 [2024-11-29 03:12:34.252943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:18.344 [2024-11-29 03:12:34.252954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:18.344 [2024-11-29 03:12:34.252968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:18.344 [2024-11-29 03:12:34.252974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:18.344 [2024-11-29 03:12:34.252980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:18.344 [2024-11-29 03:12:34.252986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.252996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:18.344 [2024-11-29 03:12:34.253005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:25:18.344 [2024-11-29 03:12:34.253015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.260854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.260942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.344 [2024-11-29 03:12:34.260980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.806 ms 00:25:18.344 [2024-11-29 03:12:34.260997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.261080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.261143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:18.344 [2024-11-29 03:12:34.261180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:18.344 [2024-11-29 03:12:34.261194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.282793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.283023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.344 [2024-11-29 03:12:34.283459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.547 ms 00:25:18.344 [2024-11-29 03:12:34.283584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.283694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.283726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.344 [2024-11-29 03:12:34.283757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:18.344 [2024-11-29 03:12:34.283776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.284149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.284243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.344 [2024-11-29 03:12:34.284308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:25:18.344 [2024-11-29 03:12:34.284331] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.284520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.284580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.344 [2024-11-29 03:12:34.284623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:18.344 [2024-11-29 03:12:34.284644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.289671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.289773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.344 [2024-11-29 03:12:34.289823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms 00:25:18.344 [2024-11-29 03:12:34.289870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.292159] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:18.344 [2024-11-29 03:12:34.292271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:18.344 [2024-11-29 03:12:34.292336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.292361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:18.344 [2024-11-29 03:12:34.292385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:25:18.344 [2024-11-29 03:12:34.292404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.303782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.303879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:18.344 [2024-11-29 03:12:34.303944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.338 ms 00:25:18.344 [2024-11-29 03:12:34.303963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.305401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.305483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:18.344 [2024-11-29 03:12:34.305519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:25:18.344 [2024-11-29 03:12:34.305535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.306590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.306611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:18.344 [2024-11-29 03:12:34.306618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:25:18.344 [2024-11-29 03:12:34.306623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.306872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.306884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:18.344 [2024-11-29 03:12:34.306892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:25:18.344 [2024-11-29 03:12:34.306898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 
[2024-11-29 03:12:34.320210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.320247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:18.344 [2024-11-29 03:12:34.320263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.299 ms 00:25:18.344 [2024-11-29 03:12:34.320269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.344 [2024-11-29 03:12:34.325969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:18.344 [2024-11-29 03:12:34.327968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.344 [2024-11-29 03:12:34.327992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:18.344 [2024-11-29 03:12:34.328007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.667 ms 00:25:18.345 [2024-11-29 03:12:34.328014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.328059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.328068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:18.345 [2024-11-29 03:12:34.328079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:18.345 [2024-11-29 03:12:34.328087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.328143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.328151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:18.345 [2024-11-29 03:12:34.328158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:18.345 [2024-11-29 03:12:34.328164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.328179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.328186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:18.345 [2024-11-29 03:12:34.328192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:18.345 [2024-11-29 03:12:34.328201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.328225] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:18.345 [2024-11-29 03:12:34.328233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.328240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:18.345 [2024-11-29 03:12:34.328249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:18.345 [2024-11-29 03:12:34.328256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.331034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.331061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:18.345 [2024-11-29 03:12:34.331069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:25:18.345 [2024-11-29 03:12:34.331075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.331133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.345 [2024-11-29 03:12:34.331140] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:18.345 [2024-11-29 03:12:34.331146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:18.345 [2024-11-29 03:12:34.331152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.345 [2024-11-29 03:12:34.331909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.605 ms, result 0 00:25:19.721  [2024-11-29T03:12:36.655Z] Copying: 40/1024 [MB] (40 MBps) [2024-11-29T03:12:37.600Z] Copying: 60/1024 [MB] (20 MBps) [2024-11-29T03:12:38.542Z] Copying: 76/1024 [MB] (15 MBps) [2024-11-29T03:12:39.486Z] Copying: 94/1024 [MB] (18 MBps) [2024-11-29T03:12:40.431Z] Copying: 114/1024 [MB] (19 MBps) [2024-11-29T03:12:41.373Z] Copying: 134/1024 [MB] (20 MBps) [2024-11-29T03:12:42.754Z] Copying: 152/1024 [MB] (17 MBps) [2024-11-29T03:12:43.698Z] Copying: 176/1024 [MB] (23 MBps) [2024-11-29T03:12:44.640Z] Copying: 195/1024 [MB] (19 MBps) [2024-11-29T03:12:45.605Z] Copying: 210/1024 [MB] (15 MBps) [2024-11-29T03:12:46.552Z] Copying: 227/1024 [MB] (16 MBps) [2024-11-29T03:12:47.499Z] Copying: 239/1024 [MB] (12 MBps) [2024-11-29T03:12:48.446Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-29T03:12:49.392Z] Copying: 264/1024 [MB] (14 MBps) [2024-11-29T03:12:50.777Z] Copying: 275/1024 [MB] (11 MBps) [2024-11-29T03:12:51.350Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-29T03:12:52.733Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-29T03:12:53.676Z] Copying: 312/1024 [MB] (15 MBps) [2024-11-29T03:12:54.621Z] Copying: 329/1024 [MB] (16 MBps) [2024-11-29T03:12:55.590Z] Copying: 346/1024 [MB] (16 MBps) [2024-11-29T03:12:56.536Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-29T03:12:57.482Z] Copying: 373/1024 [MB] (15 MBps) [2024-11-29T03:12:58.460Z] Copying: 386/1024 [MB] (12 MBps) [2024-11-29T03:12:59.404Z] Copying: 404/1024 [MB] (17 MBps) [2024-11-29T03:13:00.352Z] Copying: 424/1024 [MB] (20 MBps) [2024-11-29T03:13:01.740Z] Copying: 446/1024 [MB] (21 MBps) [2024-11-29T03:13:02.687Z] Copying: 457/1024 [MB] (11 MBps) [2024-11-29T03:13:03.635Z] Copying: 471/1024 [MB] (14 MBps) [2024-11-29T03:13:04.583Z] Copying: 487/1024 [MB] (16 MBps) [2024-11-29T03:13:05.531Z] Copying: 505/1024 [MB] (17 MBps) [2024-11-29T03:13:06.475Z] Copying: 519/1024 [MB] (13 MBps) [2024-11-29T03:13:07.420Z] Copying: 533/1024 [MB] (14 MBps) [2024-11-29T03:13:08.366Z] Copying: 553/1024 [MB] (19 MBps) [2024-11-29T03:13:09.746Z] Copying: 573/1024 [MB] (20 MBps) [2024-11-29T03:13:10.686Z] Copying: 595/1024 [MB] (21 MBps) [2024-11-29T03:13:11.630Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-29T03:13:12.571Z] Copying: 623/1024 [MB] (16 MBps) [2024-11-29T03:13:13.512Z] Copying: 653/1024 [MB] (29 MBps) [2024-11-29T03:13:14.455Z] Copying: 672/1024 [MB] (19 MBps) [2024-11-29T03:13:15.399Z] Copying: 683/1024 [MB] (11 MBps) [2024-11-29T03:13:16.787Z] Copying: 696/1024 [MB] (12 MBps) [2024-11-29T03:13:17.361Z] Copying: 710/1024 [MB] (13 MBps) [2024-11-29T03:13:18.751Z] Copying: 721/1024 [MB] (11 MBps) [2024-11-29T03:13:19.697Z] Copying: 736/1024 [MB] (14 MBps) [2024-11-29T03:13:20.643Z] Copying: 752/1024 [MB] (16 MBps) [2024-11-29T03:13:21.588Z] Copying: 781108/1048576 [kB] (10140 kBps) [2024-11-29T03:13:22.533Z] Copying: 776/1024 [MB] (13 MBps) [2024-11-29T03:13:23.478Z] Copying: 794/1024 [MB] (18 MBps) [2024-11-29T03:13:24.436Z] Copying: 815/1024 [MB] (20 MBps) [2024-11-29T03:13:25.378Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-29T03:13:26.761Z] Copying: 855780/1048576 
[kB] (10104 kBps) [2024-11-29T03:13:27.375Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-29T03:13:28.765Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-29T03:13:29.711Z] Copying: 867/1024 [MB] (10 MBps) [2024-11-29T03:13:30.656Z] Copying: 877/1024 [MB] (10 MBps) [2024-11-29T03:13:31.602Z] Copying: 888/1024 [MB] (10 MBps) [2024-11-29T03:13:32.548Z] Copying: 898/1024 [MB] (10 MBps) [2024-11-29T03:13:33.493Z] Copying: 909/1024 [MB] (10 MBps) [2024-11-29T03:13:34.438Z] Copying: 941208/1048576 [kB] (10200 kBps) [2024-11-29T03:13:35.382Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-29T03:13:36.771Z] Copying: 961844/1048576 [kB] (10204 kBps) [2024-11-29T03:13:37.344Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-29T03:13:38.733Z] Copying: 959/1024 [MB] (10 MBps) [2024-11-29T03:13:39.674Z] Copying: 970/1024 [MB] (11 MBps) [2024-11-29T03:13:40.618Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-29T03:13:41.562Z] Copying: 994/1024 [MB] (11 MBps) [2024-11-29T03:13:42.505Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-29T03:13:43.446Z] Copying: 1017/1024 [MB] (11 MBps) [2024-11-29T03:13:44.018Z] Copying: 1048096/1048576 [kB] (6552 kBps) [2024-11-29T03:13:44.018Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-29 03:13:43.820407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.820489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.026 [2024-11-29 03:13:43.820507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.026 [2024-11-29 03:13:43.820518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.823748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.026 [2024-11-29 03:13:43.827625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.827817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.026 [2024-11-29 03:13:43.827857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:26:28.026 [2024-11-29 03:13:43.827867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.840156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.840206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.026 [2024-11-29 03:13:43.840219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.163 ms 00:26:28.026 [2024-11-29 03:13:43.840227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.866161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.866374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.026 [2024-11-29 03:13:43.866395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.900 ms 00:26:28.026 [2024-11-29 03:13:43.866412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.872620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.872667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.026 [2024-11-29 03:13:43.872681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.145 ms 00:26:28.026 [2024-11-29 03:13:43.872690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.875511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.875565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.026 [2024-11-29 03:13:43.875577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:26:28.026 [2024-11-29 03:13:43.875584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.026 [2024-11-29 03:13:43.880269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.026 [2024-11-29 03:13:43.880451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.026 [2024-11-29 03:13:43.880471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:26:28.026 [2024-11-29 03:13:43.880480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.170934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.289 [2024-11-29 03:13:44.171130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.289 [2024-11-29 03:13:44.171153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 290.388 ms 00:26:28.289 [2024-11-29 03:13:44.171163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.174643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.289 [2024-11-29 03:13:44.174821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.289 [2024-11-29 03:13:44.174855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:26:28.289 [2024-11-29 03:13:44.174863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.177512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.289 [2024-11-29 03:13:44.177567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:28.289 [2024-11-29 03:13:44.177578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:26:28.289 [2024-11-29 03:13:44.177586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.179767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.289 [2024-11-29 03:13:44.179967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.289 [2024-11-29 03:13:44.180076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:26:28.289 [2024-11-29 03:13:44.180089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.182333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.289 [2024-11-29 03:13:44.182416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.289 [2024-11-29 03:13:44.182509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:26:28.289 [2024-11-29 03:13:44.182534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.289 [2024-11-29 03:13:44.182618] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:28.289 [2024-11-29 03:13:44.182662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105984 / 261120 wr_cnt: 1 state: open 00:26:28.289 [2024-11-29 03:13:44.182752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.182790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.182824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.182876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.182963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.182999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.183923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184089] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.289 [2024-11-29 03:13:44.184684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 
03:13:44.184780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.184984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:26:28.290 [2024-11-29 03:13:44.184993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.290 [2024-11-29 03:13:44.185188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.290 
[2024-11-29 03:13:44.185201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4aeceb6-49df-40de-9f15-247c3b8e06b1 00:26:28.290 [2024-11-29 03:13:44.185209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105984 00:26:28.290 [2024-11-29 03:13:44.185217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106944 00:26:28.290 [2024-11-29 03:13:44.185225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105984 00:26:28.290 [2024-11-29 03:13:44.185234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:26:28.290 [2024-11-29 03:13:44.185247] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.290 [2024-11-29 03:13:44.185255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.290 [2024-11-29 03:13:44.185263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.290 [2024-11-29 03:13:44.185270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.290 [2024-11-29 03:13:44.185277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.290 [2024-11-29 03:13:44.185285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.290 [2024-11-29 03:13:44.185293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.290 [2024-11-29 03:13:44.185303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:26:28.290 [2024-11-29 03:13:44.185311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.187919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.290 [2024-11-29 03:13:44.188054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.290 [2024-11-29 03:13:44.188111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:26:28.290 [2024-11-29 03:13:44.188160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.188331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.290 [2024-11-29 03:13:44.188374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.290 [2024-11-29 03:13:44.188447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:26:28.290 [2024-11-29 03:13:44.188469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.196461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.290 [2024-11-29 03:13:44.196641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.290 [2024-11-29 03:13:44.196699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.290 [2024-11-29 03:13:44.196722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.196814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.290 [2024-11-29 03:13:44.196859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.290 [2024-11-29 03:13:44.196881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.290 [2024-11-29 03:13:44.196943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.197032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.290 
[2024-11-29 03:13:44.197063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.290 [2024-11-29 03:13:44.197085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.290 [2024-11-29 03:13:44.197104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.290 [2024-11-29 03:13:44.197134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.290 [2024-11-29 03:13:44.197201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.291 [2024-11-29 03:13:44.197227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.197246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.211436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.211606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.291 [2024-11-29 03:13:44.211665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.211689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.222150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.222322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.291 [2024-11-29 03:13:44.222377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.222400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.222463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.222486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.291 [2024-11-29 03:13:44.222506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.222525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.222583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.222604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.291 [2024-11-29 03:13:44.222628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.222679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.222776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.222802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.291 [2024-11-29 03:13:44.222892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.222916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.222973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.222997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.291 [2024-11-29 03:13:44.223084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.223148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.223206] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.223253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.291 [2024-11-29 03:13:44.223277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.223379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.223449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-29 03:13:44.223475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.291 [2024-11-29 03:13:44.223501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-29 03:13:44.223521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-29 03:13:44.223677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.294 ms, result 0 00:26:29.235 00:26:29.235 00:26:29.235 03:13:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:31.150 03:13:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:31.150 [2024-11-29 03:13:47.088415] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:26:31.150 [2024-11-29 03:13:47.088538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92541 ] 00:26:31.410 [2024-11-29 03:13:47.233514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.410 [2024-11-29 03:13:47.263527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.410 [2024-11-29 03:13:47.380437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:31.411 [2024-11-29 03:13:47.380529] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:31.674 [2024-11-29 03:13:47.542234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.542298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:31.674 [2024-11-29 03:13:47.542317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:31.674 [2024-11-29 03:13:47.542325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.542377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.542388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:31.674 [2024-11-29 03:13:47.542396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:31.674 [2024-11-29 03:13:47.542412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.542440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:31.674 [2024-11-29 03:13:47.542700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:31.674 
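For reference, the WAF figure in the statistics dump above follows directly from the two write counters printed beside it: total writes / user writes = 106944 / 105984 ≈ 1.0091, i.e. roughly 0.9% of the media writes in this run were FTL metadata rather than host data.

The script lines echoed above (ftl/dirty_shutdown.sh@90 and @93) record a checksum and then read the test data back out of the ftl0 bdev with spdk_dd. A minimal sketch of that read-back step, using only the paths and parameters that appear verbatim in this log (the final digest comparison is performed by the test script and falls outside this excerpt):

  # Read 262144 blocks from the ftl0 bdev into a regular file; the FTL
  # device is reconstructed from the JSON config saved by the test.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --count=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

  # Digest of the reference file, to be compared against the read-back
  # data; matching digests mean the dirty-shutdown recovery (the 'FTL
  # startup' trace around this point) lost no data.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2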
[2024-11-29 03:13:47.542716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.542725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:31.674 [2024-11-29 03:13:47.542738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:31.674 [2024-11-29 03:13:47.542749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.544578] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:31.674 [2024-11-29 03:13:47.548693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.548747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:31.674 [2024-11-29 03:13:47.548760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:26:31.674 [2024-11-29 03:13:47.548784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.548886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.548902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:31.674 [2024-11-29 03:13:47.548912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:31.674 [2024-11-29 03:13:47.548920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.557432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.557484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:31.674 [2024-11-29 03:13:47.557499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.463 ms 00:26:31.674 [2024-11-29 03:13:47.557507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.557612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.557627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:31.674 [2024-11-29 03:13:47.557635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:31.674 [2024-11-29 03:13:47.557643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.557706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.557722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:31.674 [2024-11-29 03:13:47.557730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:31.674 [2024-11-29 03:13:47.557741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.557766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:31.674 [2024-11-29 03:13:47.559902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.559938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:31.674 [2024-11-29 03:13:47.559949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:26:31.674 [2024-11-29 03:13:47.559962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.559998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 
03:13:47.560007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:31.674 [2024-11-29 03:13:47.560016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:31.674 [2024-11-29 03:13:47.560028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.560051] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:31.674 [2024-11-29 03:13:47.560073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:31.674 [2024-11-29 03:13:47.560116] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:31.674 [2024-11-29 03:13:47.560133] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:31.674 [2024-11-29 03:13:47.560240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:31.674 [2024-11-29 03:13:47.560254] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:31.674 [2024-11-29 03:13:47.560268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:31.674 [2024-11-29 03:13:47.560282] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:31.674 [2024-11-29 03:13:47.560291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:31.674 [2024-11-29 03:13:47.560301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:31.674 [2024-11-29 03:13:47.560309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:31.674 [2024-11-29 03:13:47.560316] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:31.674 [2024-11-29 03:13:47.560329] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:31.674 [2024-11-29 03:13:47.560338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.560350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:31.674 [2024-11-29 03:13:47.560359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:26:31.674 [2024-11-29 03:13:47.560366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.560453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.674 [2024-11-29 03:13:47.560465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:31.674 [2024-11-29 03:13:47.560473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:31.674 [2024-11-29 03:13:47.560480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.674 [2024-11-29 03:13:47.560589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:31.674 [2024-11-29 03:13:47.560601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:31.674 [2024-11-29 03:13:47.560610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:31.674 [2024-11-29 03:13:47.560619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560627] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:26:31.675 [2024-11-29 03:13:47.560636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:31.675 [2024-11-29 03:13:47.560659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:31.675 [2024-11-29 03:13:47.560675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:31.675 [2024-11-29 03:13:47.560682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:31.675 [2024-11-29 03:13:47.560690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:31.675 [2024-11-29 03:13:47.560699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:31.675 [2024-11-29 03:13:47.560711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:31.675 [2024-11-29 03:13:47.560718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:31.675 [2024-11-29 03:13:47.560737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:31.675 [2024-11-29 03:13:47.560762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:31.675 [2024-11-29 03:13:47.560785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:31.675 [2024-11-29 03:13:47.560809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:31.675 [2024-11-29 03:13:47.560848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:31.675 [2024-11-29 03:13:47.560875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:31.675 [2024-11-29 03:13:47.560890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:31.675 [2024-11-29 03:13:47.560898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:31.675 [2024-11-29 03:13:47.560905] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:31.675 [2024-11-29 03:13:47.560913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:31.675 [2024-11-29 03:13:47.560920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:31.675 [2024-11-29 03:13:47.560928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:31.675 [2024-11-29 03:13:47.560944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:31.675 [2024-11-29 03:13:47.560951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.560959] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:31.675 [2024-11-29 03:13:47.560971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:31.675 [2024-11-29 03:13:47.560980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:31.675 [2024-11-29 03:13:47.560990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.675 [2024-11-29 03:13:47.561000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:31.675 [2024-11-29 03:13:47.561011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:31.675 [2024-11-29 03:13:47.561019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:31.675 [2024-11-29 03:13:47.561028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:31.675 [2024-11-29 03:13:47.561036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:31.675 [2024-11-29 03:13:47.561043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:31.675 [2024-11-29 03:13:47.561052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:31.675 [2024-11-29 03:13:47.561062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:31.675 [2024-11-29 03:13:47.561078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:31.675 [2024-11-29 03:13:47.561086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:31.675 [2024-11-29 03:13:47.561093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:31.675 [2024-11-29 03:13:47.561100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:31.675 [2024-11-29 03:13:47.561109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:31.675 [2024-11-29 03:13:47.561116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:31.675 [2024-11-29 03:13:47.561124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe 
ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:31.675 [2024-11-29 03:13:47.561131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:31.675 [2024-11-29 03:13:47.561147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:31.675 [2024-11-29 03:13:47.561183] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:31.675 [2024-11-29 03:13:47.561192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:31.675 [2024-11-29 03:13:47.561208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:31.675 [2024-11-29 03:13:47.561215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:31.675 [2024-11-29 03:13:47.561222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:31.675 [2024-11-29 03:13:47.561230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.561238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:31.675 [2024-11-29 03:13:47.561246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:26:31.675 [2024-11-29 03:13:47.561258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.575865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.575920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:31.675 [2024-11-29 03:13:47.575933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:26:31.675 [2024-11-29 03:13:47.575944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.576035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.576045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:31.675 [2024-11-29 03:13:47.576058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:31.675 [2024-11-29 03:13:47.576066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.608112] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.608323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:31.675 [2024-11-29 03:13:47.608345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.983 ms 00:26:31.675 [2024-11-29 03:13:47.608355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.608408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.608419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:31.675 [2024-11-29 03:13:47.608428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:31.675 [2024-11-29 03:13:47.608436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.609053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.609079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:31.675 [2024-11-29 03:13:47.609093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:26:31.675 [2024-11-29 03:13:47.609109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.609261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.675 [2024-11-29 03:13:47.609286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:31.675 [2024-11-29 03:13:47.609295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:26:31.675 [2024-11-29 03:13:47.609304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.675 [2024-11-29 03:13:47.617669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.617720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:31.676 [2024-11-29 03:13:47.617732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.343 ms 00:26:31.676 [2024-11-29 03:13:47.617740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.676 [2024-11-29 03:13:47.622043] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:31.676 [2024-11-29 03:13:47.622096] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:31.676 [2024-11-29 03:13:47.622114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.622124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:31.676 [2024-11-29 03:13:47.622132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:26:31.676 [2024-11-29 03:13:47.622146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.676 [2024-11-29 03:13:47.638001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.638051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:31.676 [2024-11-29 03:13:47.638064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:26:31.676 [2024-11-29 03:13:47.638072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.676 [2024-11-29 03:13:47.640690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.640745] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:31.676 [2024-11-29 03:13:47.640756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:26:31.676 [2024-11-29 03:13:47.640764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.676 [2024-11-29 03:13:47.642996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.643191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:31.676 [2024-11-29 03:13:47.643210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:26:31.676 [2024-11-29 03:13:47.643218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.676 [2024-11-29 03:13:47.643658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.676 [2024-11-29 03:13:47.643684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:31.676 [2024-11-29 03:13:47.643696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:26:31.676 [2024-11-29 03:13:47.643711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.668101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.668169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:31.939 [2024-11-29 03:13:47.668183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.357 ms 00:26:31.939 [2024-11-29 03:13:47.668192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.676472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:31.939 [2024-11-29 03:13:47.679777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.679823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:31.939 [2024-11-29 03:13:47.679853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.528 ms 00:26:31.939 [2024-11-29 03:13:47.679862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.679953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.679970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:31.939 [2024-11-29 03:13:47.679983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:31.939 [2024-11-29 03:13:47.680002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.681823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.681907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:31.939 [2024-11-29 03:13:47.681918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:26:31.939 [2024-11-29 03:13:47.681926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.681963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.681972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:31.939 [2024-11-29 03:13:47.681982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:31.939 [2024-11-29 03:13:47.681990] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.682029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:31.939 [2024-11-29 03:13:47.682040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.682048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:31.939 [2024-11-29 03:13:47.682060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:31.939 [2024-11-29 03:13:47.682068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.687749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.687804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:31.939 [2024-11-29 03:13:47.687815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.654 ms 00:26:31.939 [2024-11-29 03:13:47.687848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.687938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.939 [2024-11-29 03:13:47.687949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:31.939 [2024-11-29 03:13:47.687963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:31.939 [2024-11-29 03:13:47.687976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.939 [2024-11-29 03:13:47.689312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.583 ms, result 0 00:26:33.328  [2024-11-29T03:13:49.893Z] Copying: 968/1048576 [kB] (968 kBps) [2024-11-29T03:14:33.769Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-29 03:14:33.748674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.777 [2024-11-29 03:14:33.748766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:17.777 [2024-11-29 03:14:33.748787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:17.777 [2024-11-29 03:14:33.748797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.777 [2024-11-29 03:14:33.748823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:17.777 [2024-11-29 03:14:33.749700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.777 [2024-11-29 03:14:33.749737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:17.777 [2024-11-29 03:14:33.749750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:27:17.777 [2024-11-29 03:14:33.749760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.777 [2024-11-29 03:14:33.750142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.777 [2024-11-29 03:14:33.750155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:17.777 [2024-11-29 03:14:33.750165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:27:17.777 [2024-11-29 03:14:33.750174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.767898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.767948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:18.040 [2024-11-29 03:14:33.767987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:27:18.040 [2024-11-29 03:14:33.767998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.774689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.775071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:18.040 [2024-11-29 03:14:33.775096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.651 ms 00:27:18.040 [2024-11-29 03:14:33.775105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.778513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.778555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist NV cache metadata 00:27:18.040 [2024-11-29 03:14:33.778572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:27:18.040 [2024-11-29 03:14:33.778584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.783046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.783209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:18.040 [2024-11-29 03:14:33.783272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:27:18.040 [2024-11-29 03:14:33.783296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.787320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.787447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:18.040 [2024-11-29 03:14:33.787511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:27:18.040 [2024-11-29 03:14:33.787535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.790456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.790610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:18.040 [2024-11-29 03:14:33.790671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:27:18.040 [2024-11-29 03:14:33.790694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.792395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.792531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:18.040 [2024-11-29 03:14:33.792585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:27:18.040 [2024-11-29 03:14:33.792607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.794156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.794286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:18.040 [2024-11-29 03:14:33.794337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:27:18.040 [2024-11-29 03:14:33.794358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.795900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.040 [2024-11-29 03:14:33.796024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:18.040 [2024-11-29 03:14:33.796074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:27:18.040 [2024-11-29 03:14:33.796095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.040 [2024-11-29 03:14:33.796159] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:18.040 [2024-11-29 03:14:33.796188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:18.040 [2024-11-29 03:14:33.796221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:18.040 [2024-11-29 03:14:33.796297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 
03:14:33.796330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.040 [2024-11-29 03:14:33.796577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.796943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:27:18.041 [2024-11-29 03:14:33.797435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.797967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.798908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:18.041 [2024-11-29 03:14:33.799451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:18.042 [2024-11-29 03:14:33.799458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:18.042 [2024-11-29 03:14:33.799466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:18.042 [2024-11-29 03:14:33.799482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:18.042 [2024-11-29 03:14:33.799497] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4aeceb6-49df-40de-9f15-247c3b8e06b1 00:27:18.042 [2024-11-29 03:14:33.799521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 262656 00:27:18.042 [2024-11-29 03:14:33.799528] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158656 00:27:18.042 [2024-11-29 03:14:33.799536] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156672 00:27:18.042 [2024-11-29 03:14:33.799545] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:27:18.042 [2024-11-29 03:14:33.799552] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:18.042 [2024-11-29 03:14:33.799560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:18.042 [2024-11-29 03:14:33.799572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:18.042 [2024-11-29 03:14:33.799578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:18.042 [2024-11-29 03:14:33.799585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:18.042 [2024-11-29 03:14:33.799595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.042 [2024-11-29 03:14:33.799612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:18.042 [2024-11-29 03:14:33.799621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:27:18.042 [2024-11-29 03:14:33.799629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.801980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.042 [2024-11-29 03:14:33.802106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:18.042 [2024-11-29 03:14:33.802182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:27:18.042 [2024-11-29 03:14:33.802207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.802365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.042 [2024-11-29 03:14:33.802439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:18.042 [2024-11-29 03:14:33.802507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:27:18.042 [2024-11-29 03:14:33.802528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.809457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.809584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.042 [2024-11-29 03:14:33.809643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.809676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.809786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.809859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.042 [2024-11-29 03:14:33.809887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.809895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.809964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.809975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.042 [2024-11-29 03:14:33.809984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 
03:14:33.809997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.810013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.810022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.042 [2024-11-29 03:14:33.810030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.810044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.822664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.822814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.042 [2024-11-29 03:14:33.822891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.822916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.833159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.833310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.042 [2024-11-29 03:14:33.833375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.833398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.833461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.833483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.042 [2024-11-29 03:14:33.833504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.833522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.833570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.833636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.042 [2024-11-29 03:14:33.833659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.833678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.833773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.833860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.042 [2024-11-29 03:14:33.833896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.833916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.833971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.834103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:18.042 [2024-11-29 03:14:33.834127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.834146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.834206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.834271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.042 [2024-11-29 03:14:33.834295] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.834315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.834374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.042 [2024-11-29 03:14:33.834682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.042 [2024-11-29 03:14:33.834705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.042 [2024-11-29 03:14:33.834733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.042 [2024-11-29 03:14:33.834911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.199 ms, result 0 00:27:18.304 00:27:18.304 00:27:18.304 03:14:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:20.851 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:20.851 03:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:20.851 [2024-11-29 03:14:36.346345] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:27:20.851 [2024-11-29 03:14:36.346489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93042 ] 00:27:20.851 [2024-11-29 03:14:36.493519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.851 [2024-11-29 03:14:36.522271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.851 [2024-11-29 03:14:36.638586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.851 [2024-11-29 03:14:36.638676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.851 [2024-11-29 03:14:36.800240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.800302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:20.851 [2024-11-29 03:14:36.800316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:20.851 [2024-11-29 03:14:36.800325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.800376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.800386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.851 [2024-11-29 03:14:36.800396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:20.851 [2024-11-29 03:14:36.800410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.800443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:20.851 [2024-11-29 03:14:36.800751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:20.851 [2024-11-29 03:14:36.800772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.800781] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.851 [2024-11-29 03:14:36.800793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:27:20.851 [2024-11-29 03:14:36.800801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.802526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:20.851 [2024-11-29 03:14:36.806193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.806243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:20.851 [2024-11-29 03:14:36.806255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms 00:27:20.851 [2024-11-29 03:14:36.806271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.806340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.806350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:20.851 [2024-11-29 03:14:36.806359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:20.851 [2024-11-29 03:14:36.806367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.814179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.814220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.851 [2024-11-29 03:14:36.814237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.770 ms 00:27:20.851 [2024-11-29 03:14:36.814245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.814347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.814356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.851 [2024-11-29 03:14:36.814368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:27:20.851 [2024-11-29 03:14:36.814376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.851 [2024-11-29 03:14:36.814427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.851 [2024-11-29 03:14:36.814439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:20.851 [2024-11-29 03:14:36.814448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:20.851 [2024-11-29 03:14:36.814462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.852 [2024-11-29 03:14:36.814491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:20.852 [2024-11-29 03:14:36.816504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.852 [2024-11-29 03:14:36.816543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.852 [2024-11-29 03:14:36.816553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:27:20.852 [2024-11-29 03:14:36.816561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.852 [2024-11-29 03:14:36.816596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.852 [2024-11-29 03:14:36.816605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:20.852 [2024-11-29 03:14:36.816622] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:20.852 [2024-11-29 03:14:36.816633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.852 [2024-11-29 03:14:36.816655] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:20.852 [2024-11-29 03:14:36.816677] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:20.852 [2024-11-29 03:14:36.816721] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:20.852 [2024-11-29 03:14:36.816742] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:20.852 [2024-11-29 03:14:36.816867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:20.852 [2024-11-29 03:14:36.816879] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:20.852 [2024-11-29 03:14:36.816893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:20.852 [2024-11-29 03:14:36.816906] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:20.852 [2024-11-29 03:14:36.816916] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:20.852 [2024-11-29 03:14:36.816925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:20.852 [2024-11-29 03:14:36.816933] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:20.852 [2024-11-29 03:14:36.816942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:20.852 [2024-11-29 03:14:36.816951] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:20.852 [2024-11-29 03:14:36.816960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.852 [2024-11-29 03:14:36.816972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:20.852 [2024-11-29 03:14:36.816988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:27:20.852 [2024-11-29 03:14:36.816998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.852 [2024-11-29 03:14:36.817083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.852 [2024-11-29 03:14:36.817102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:20.852 [2024-11-29 03:14:36.817112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:20.852 [2024-11-29 03:14:36.817119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.852 [2024-11-29 03:14:36.817226] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:20.852 [2024-11-29 03:14:36.817250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:20.852 [2024-11-29 03:14:36.817265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:20.852 [2024-11-29 03:14:36.817294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:20.852 
[2024-11-29 03:14:36.817303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:20.852 [2024-11-29 03:14:36.817324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.852 [2024-11-29 03:14:36.817344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:20.852 [2024-11-29 03:14:36.817352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:20.852 [2024-11-29 03:14:36.817361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.852 [2024-11-29 03:14:36.817369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:20.852 [2024-11-29 03:14:36.817379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:20.852 [2024-11-29 03:14:36.817390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:20.852 [2024-11-29 03:14:36.817406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:20.852 [2024-11-29 03:14:36.817432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:20.852 [2024-11-29 03:14:36.817456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:20.852 [2024-11-29 03:14:36.817485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:20.852 [2024-11-29 03:14:36.817508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:20.852 [2024-11-29 03:14:36.817531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.852 [2024-11-29 03:14:36.817546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:20.852 [2024-11-29 03:14:36.817553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:20.852 [2024-11-29 03:14:36.817561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.852 [2024-11-29 03:14:36.817569] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:27:20.852 [2024-11-29 03:14:36.817576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:20.852 [2024-11-29 03:14:36.817584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:20.852 [2024-11-29 03:14:36.817601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:20.852 [2024-11-29 03:14:36.817610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:20.852 [2024-11-29 03:14:36.817628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:20.852 [2024-11-29 03:14:36.817637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.852 [2024-11-29 03:14:36.817656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:20.852 [2024-11-29 03:14:36.817665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:20.852 [2024-11-29 03:14:36.817671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:20.852 [2024-11-29 03:14:36.817679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:20.852 [2024-11-29 03:14:36.817687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:20.852 [2024-11-29 03:14:36.817695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:20.852 [2024-11-29 03:14:36.817703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:20.852 [2024-11-29 03:14:36.817712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.852 [2024-11-29 03:14:36.817724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:20.852 [2024-11-29 03:14:36.817731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:20.852 [2024-11-29 03:14:36.817740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:20.852 [2024-11-29 03:14:36.817748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:20.852 [2024-11-29 03:14:36.817756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:20.852 [2024-11-29 03:14:36.817764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:20.852 [2024-11-29 03:14:36.817772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:20.852 [2024-11-29 03:14:36.817779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:20.852 [2024-11-29 03:14:36.817787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:20.852 [2024-11-29 03:14:36.817802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:20.852 [2024-11-29 03:14:36.817809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:20.852 [2024-11-29 03:14:36.817816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:20.852 [2024-11-29 03:14:36.817846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:20.852 [2024-11-29 03:14:36.817856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:20.853 [2024-11-29 03:14:36.817876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:20.853 [2024-11-29 03:14:36.817884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.853 [2024-11-29 03:14:36.817892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:20.853 [2024-11-29 03:14:36.817901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:20.853 [2024-11-29 03:14:36.817911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:20.853 [2024-11-29 03:14:36.817919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:20.853 [2024-11-29 03:14:36.817928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.853 [2024-11-29 03:14:36.817940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:20.853 [2024-11-29 03:14:36.817948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:27:20.853 [2024-11-29 03:14:36.817959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.853 [2024-11-29 03:14:36.831550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.853 [2024-11-29 03:14:36.831596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.853 [2024-11-29 03:14:36.831614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.543 ms 00:27:20.853 [2024-11-29 03:14:36.831622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.853 [2024-11-29 03:14:36.831710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.853 [2024-11-29 03:14:36.831721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:20.853 [2024-11-29 03:14:36.831730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:20.853 [2024-11-29 03:14:36.831738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.115 [2024-11-29 03:14:36.852341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.115 [2024-11-29 03:14:36.852409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:27:21.115 [2024-11-29 03:14:36.852427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.546 ms 00:27:21.115 [2024-11-29 03:14:36.852439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.115 [2024-11-29 03:14:36.852500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.115 [2024-11-29 03:14:36.852515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.115 [2024-11-29 03:14:36.852534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.115 [2024-11-29 03:14:36.852547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.115 [2024-11-29 03:14:36.853187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.115 [2024-11-29 03:14:36.853238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.115 [2024-11-29 03:14:36.853254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:27:21.116 [2024-11-29 03:14:36.853266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.853480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.853495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.116 [2024-11-29 03:14:36.853508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:27:21.116 [2024-11-29 03:14:36.853525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.862229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.862280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.116 [2024-11-29 03:14:36.862294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.675 ms 00:27:21.116 [2024-11-29 03:14:36.862301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.866066] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:21.116 [2024-11-29 03:14:36.866120] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:21.116 [2024-11-29 03:14:36.866136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.866145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:21.116 [2024-11-29 03:14:36.866155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.746 ms 00:27:21.116 [2024-11-29 03:14:36.866163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.881526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.881574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:21.116 [2024-11-29 03:14:36.881587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.311 ms 00:27:21.116 [2024-11-29 03:14:36.881596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.884338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.884383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:21.116 [2024-11-29 03:14:36.884394] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:27:21.116 [2024-11-29 03:14:36.884401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.887070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.887117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:21.116 [2024-11-29 03:14:36.887137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:27:21.116 [2024-11-29 03:14:36.887145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.887480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.887494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.116 [2024-11-29 03:14:36.887505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:27:21.116 [2024-11-29 03:14:36.887513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.910360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.910424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:21.116 [2024-11-29 03:14:36.910438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:27:21.116 [2024-11-29 03:14:36.910447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.918603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.116 [2024-11-29 03:14:36.921711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.921769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.116 [2024-11-29 03:14:36.921782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.209 ms 00:27:21.116 [2024-11-29 03:14:36.921795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.921917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.921929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:21.116 [2024-11-29 03:14:36.921948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:21.116 [2024-11-29 03:14:36.921957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.922762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.922814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.116 [2024-11-29 03:14:36.922841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:27:21.116 [2024-11-29 03:14:36.922851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.922884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.922894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.116 [2024-11-29 03:14:36.922903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:21.116 [2024-11-29 03:14:36.922910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.922951] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:27:21.116 [2024-11-29 03:14:36.922964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.922975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:21.116 [2024-11-29 03:14:36.922991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:21.116 [2024-11-29 03:14:36.923000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.928462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.928511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.116 [2024-11-29 03:14:36.928523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.442 ms 00:27:21.116 [2024-11-29 03:14:36.928541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.928625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.116 [2024-11-29 03:14:36.928635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.116 [2024-11-29 03:14:36.928649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:21.116 [2024-11-29 03:14:36.928661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.116 [2024-11-29 03:14:36.930988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.240 ms, result 0 00:27:22.505  [2024-11-29T03:14:39.440Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-29T03:14:40.381Z] Copying: 31/1024 [MB] (12 MBps) [2024-11-29T03:14:41.316Z] Copying: 42/1024 [MB] (10 MBps) [2024-11-29T03:14:42.260Z] Copying: 74/1024 [MB] (31 MBps) [2024-11-29T03:14:43.206Z] Copying: 95/1024 [MB] (21 MBps) [2024-11-29T03:14:44.153Z] Copying: 115/1024 [MB] (19 MBps) [2024-11-29T03:14:45.540Z] Copying: 133/1024 [MB] (18 MBps) [2024-11-29T03:14:46.112Z] Copying: 146/1024 [MB] (13 MBps) [2024-11-29T03:14:47.498Z] Copying: 164/1024 [MB] (17 MBps) [2024-11-29T03:14:48.442Z] Copying: 178/1024 [MB] (13 MBps) [2024-11-29T03:14:49.386Z] Copying: 198/1024 [MB] (19 MBps) [2024-11-29T03:14:50.332Z] Copying: 208/1024 [MB] (10 MBps) [2024-11-29T03:14:51.278Z] Copying: 232/1024 [MB] (23 MBps) [2024-11-29T03:14:52.217Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-29T03:14:53.159Z] Copying: 258/1024 [MB] (15 MBps) [2024-11-29T03:14:54.112Z] Copying: 275/1024 [MB] (17 MBps) [2024-11-29T03:14:55.508Z] Copying: 291/1024 [MB] (15 MBps) [2024-11-29T03:14:56.450Z] Copying: 304/1024 [MB] (12 MBps) [2024-11-29T03:14:57.393Z] Copying: 330/1024 [MB] (26 MBps) [2024-11-29T03:14:58.338Z] Copying: 352/1024 [MB] (21 MBps) [2024-11-29T03:14:59.283Z] Copying: 366/1024 [MB] (14 MBps) [2024-11-29T03:15:00.223Z] Copying: 384/1024 [MB] (17 MBps) [2024-11-29T03:15:01.163Z] Copying: 398/1024 [MB] (14 MBps) [2024-11-29T03:15:02.546Z] Copying: 409/1024 [MB] (10 MBps) [2024-11-29T03:15:03.120Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-29T03:15:04.510Z] Copying: 430/1024 [MB] (10 MBps) [2024-11-29T03:15:05.454Z] Copying: 442/1024 [MB] (12 MBps) [2024-11-29T03:15:06.393Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-29T03:15:07.336Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-29T03:15:08.280Z] Copying: 485/1024 [MB] (21 MBps) [2024-11-29T03:15:09.225Z] Copying: 509/1024 [MB] (24 MBps) [2024-11-29T03:15:10.171Z] Copying: 530/1024 [MB] (21 MBps) [2024-11-29T03:15:11.142Z] Copying: 552/1024 [MB] 
(21 MBps) [2024-11-29T03:15:12.533Z] Copying: 574/1024 [MB] (21 MBps) [2024-11-29T03:15:13.476Z] Copying: 594/1024 [MB] (20 MBps) [2024-11-29T03:15:14.421Z] Copying: 613/1024 [MB] (18 MBps) [2024-11-29T03:15:15.365Z] Copying: 636/1024 [MB] (23 MBps) [2024-11-29T03:15:16.305Z] Copying: 656/1024 [MB] (19 MBps) [2024-11-29T03:15:17.242Z] Copying: 677/1024 [MB] (21 MBps) [2024-11-29T03:15:18.183Z] Copying: 706/1024 [MB] (29 MBps) [2024-11-29T03:15:19.129Z] Copying: 726/1024 [MB] (19 MBps) [2024-11-29T03:15:20.518Z] Copying: 739/1024 [MB] (13 MBps) [2024-11-29T03:15:21.463Z] Copying: 750/1024 [MB] (10 MBps) [2024-11-29T03:15:22.407Z] Copying: 762/1024 [MB] (12 MBps) [2024-11-29T03:15:23.401Z] Copying: 774/1024 [MB] (11 MBps) [2024-11-29T03:15:24.345Z] Copying: 791/1024 [MB] (17 MBps) [2024-11-29T03:15:25.290Z] Copying: 807/1024 [MB] (15 MBps) [2024-11-29T03:15:26.233Z] Copying: 817/1024 [MB] (10 MBps) [2024-11-29T03:15:27.177Z] Copying: 828/1024 [MB] (11 MBps) [2024-11-29T03:15:28.120Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-29T03:15:29.508Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-29T03:15:30.450Z] Copying: 860/1024 [MB] (10 MBps) [2024-11-29T03:15:31.395Z] Copying: 876/1024 [MB] (16 MBps) [2024-11-29T03:15:32.338Z] Copying: 889/1024 [MB] (12 MBps) [2024-11-29T03:15:33.283Z] Copying: 904/1024 [MB] (15 MBps) [2024-11-29T03:15:34.228Z] Copying: 919/1024 [MB] (14 MBps) [2024-11-29T03:15:35.171Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-29T03:15:36.114Z] Copying: 945/1024 [MB] (11 MBps) [2024-11-29T03:15:37.504Z] Copying: 959/1024 [MB] (13 MBps) [2024-11-29T03:15:38.452Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-29T03:15:39.399Z] Copying: 984/1024 [MB] (14 MBps) [2024-11-29T03:15:40.344Z] Copying: 994/1024 [MB] (10 MBps) [2024-11-29T03:15:40.606Z] Copying: 1015/1024 [MB] (20 MBps) [2024-11-29T03:15:40.606Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 03:15:40.466732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.614 [2024-11-29 03:15:40.466814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:24.614 [2024-11-29 03:15:40.466855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:24.614 [2024-11-29 03:15:40.466865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.614 [2024-11-29 03:15:40.466889] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:24.614 [2024-11-29 03:15:40.467908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.614 [2024-11-29 03:15:40.467948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:24.614 [2024-11-29 03:15:40.467965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:28:24.614 [2024-11-29 03:15:40.467974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.614 [2024-11-29 03:15:40.468199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.614 [2024-11-29 03:15:40.468228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:24.614 [2024-11-29 03:15:40.468239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:28:24.614 [2024-11-29 03:15:40.468252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.614 [2024-11-29 03:15:40.472178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.614 [2024-11-29 03:15:40.472202] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:24.614 [2024-11-29 03:15:40.472217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.912 ms 00:28:24.614 [2024-11-29 03:15:40.472229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.478636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.478679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:24.615 [2024-11-29 03:15:40.478690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.388 ms 00:28:24.615 [2024-11-29 03:15:40.478698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.481541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.481596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:24.615 [2024-11-29 03:15:40.481607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:28:24.615 [2024-11-29 03:15:40.481615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.486031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.486086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:24.615 [2024-11-29 03:15:40.486098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.368 ms 00:28:24.615 [2024-11-29 03:15:40.486106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.490550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.490600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:24.615 [2024-11-29 03:15:40.490623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:28:24.615 [2024-11-29 03:15:40.490637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.493780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.494017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:24.615 [2024-11-29 03:15:40.494037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:28:24.615 [2024-11-29 03:15:40.494045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.496620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.496665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:24.615 [2024-11-29 03:15:40.496674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:28:24.615 [2024-11-29 03:15:40.496681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.499153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 [2024-11-29 03:15:40.499204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:24.615 [2024-11-29 03:15:40.499215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:28:24.615 [2024-11-29 03:15:40.499223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.501377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.615 
[2024-11-29 03:15:40.501425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:24.615 [2024-11-29 03:15:40.501436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:28:24.615 [2024-11-29 03:15:40.501443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.615 [2024-11-29 03:15:40.501483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:24.615 [2024-11-29 03:15:40.501500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.615 [2024-11-29 03:15:40.501510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:24.615 [2024-11-29 03:15:40.501518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501908] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.501997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.502005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.502013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:24.615 [2024-11-29 03:15:40.502020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502107] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 
03:15:40.502310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:24.616 [2024-11-29 03:15:40.502349] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:24.616 [2024-11-29 03:15:40.502366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4aeceb6-49df-40de-9f15-247c3b8e06b1 00:28:24.616 [2024-11-29 03:15:40.502378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:24.616 [2024-11-29 03:15:40.502386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:24.616 [2024-11-29 03:15:40.502393] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:24.616 [2024-11-29 03:15:40.502402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:24.616 [2024-11-29 03:15:40.502409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:24.616 [2024-11-29 03:15:40.502418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:24.616 [2024-11-29 03:15:40.502429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:24.616 [2024-11-29 03:15:40.502436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:24.616 [2024-11-29 03:15:40.502443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:24.616 [2024-11-29 03:15:40.502450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.616 [2024-11-29 03:15:40.502465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:24.616 [2024-11-29 03:15:40.502475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:28:24.616 [2024-11-29 03:15:40.502483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.504860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.616 [2024-11-29 03:15:40.504890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:24.616 [2024-11-29 03:15:40.504901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.353 ms 00:28:24.616 [2024-11-29 03:15:40.504910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.505034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.616 [2024-11-29 03:15:40.505049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:24.616 [2024-11-29 03:15:40.505059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:24.616 [2024-11-29 03:15:40.505067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.512752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.512977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.616 [2024-11-29 03:15:40.512998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.513014] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.513078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.513086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.616 [2024-11-29 03:15:40.513095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.513108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.513191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.513202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.616 [2024-11-29 03:15:40.513210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.513218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.513237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.513249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.616 [2024-11-29 03:15:40.513257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.513264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.527022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.527074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.616 [2024-11-29 03:15:40.527095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.527107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.537217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.616 [2024-11-29 03:15:40.537228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.537237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.537302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.616 [2024-11-29 03:15:40.537311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.537319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.537368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.616 [2024-11-29 03:15:40.537376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.537389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.537467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.616 [2024-11-29 03:15:40.537475] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.537483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.616 [2024-11-29 03:15:40.537532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:24.616 [2024-11-29 03:15:40.537540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.616 [2024-11-29 03:15:40.537548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.616 [2024-11-29 03:15:40.537589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.617 [2024-11-29 03:15:40.537598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.617 [2024-11-29 03:15:40.537607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.617 [2024-11-29 03:15:40.537615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.617 [2024-11-29 03:15:40.537664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.617 [2024-11-29 03:15:40.537677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.617 [2024-11-29 03:15:40.537685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.617 [2024-11-29 03:15:40.537704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.617 [2024-11-29 03:15:40.537861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.067 ms, result 0 00:28:24.878 00:28:24.878 00:28:24.878 03:15:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:27.426 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:27.426 03:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:27.426 03:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:27.426 03:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:27.426 03:15:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91051 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91051 ']' 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91051 00:28:27.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91051) - No such process 00:28:27.426 Process with pid 91051 is not found 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91051 is not found' 00:28:27.426 03:15:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:27.688 Remove shared memory files 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- 
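The "testfile2: OK" line above is the actual pass criterion for the dirty-shutdown case: a checksum recorded before the unclean shutdown must match what is read back after recovery. A minimal sketch of that round-trip, with paths abbreviated (the real test drives this from dirty_shutdown.sh):

    # record a checksum before triggering the dirty shutdown...
    md5sum testfile2 > testfile2.md5
    # ...then, after the FTL device is recovered and the file re-read:
    md5sum -c testfile2.md5    # prints "testfile2: OK" only if the contents are identical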
ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:27.688 ************************************ 00:28:27.688 END TEST ftl_dirty_shutdown 00:28:27.688 ************************************ 00:28:27.688 00:28:27.688 real 4m15.844s 00:28:27.688 user 4m38.908s 00:28:27.688 sys 0m26.622s 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.688 03:15:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:27.688 03:15:43 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:27.688 03:15:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:27.688 03:15:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.688 03:15:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:27.688 ************************************ 00:28:27.688 START TEST ftl_upgrade_shutdown 00:28:27.688 ************************************ 00:28:27.688 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:27.688 * Looking for test storage... 00:28:27.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:27.688 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.688 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.688 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.949 --rc genhtml_branch_coverage=1 00:28:27.949 --rc genhtml_function_coverage=1 00:28:27.949 --rc genhtml_legend=1 00:28:27.949 --rc geninfo_all_blocks=1 00:28:27.949 --rc geninfo_unexecuted_blocks=1 00:28:27.949 00:28:27.949 ' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.949 --rc genhtml_branch_coverage=1 00:28:27.949 --rc genhtml_function_coverage=1 00:28:27.949 --rc genhtml_legend=1 00:28:27.949 --rc geninfo_all_blocks=1 00:28:27.949 --rc geninfo_unexecuted_blocks=1 00:28:27.949 00:28:27.949 ' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.949 --rc genhtml_branch_coverage=1 00:28:27.949 --rc genhtml_function_coverage=1 00:28:27.949 --rc genhtml_legend=1 00:28:27.949 --rc geninfo_all_blocks=1 00:28:27.949 --rc geninfo_unexecuted_blocks=1 00:28:27.949 00:28:27.949 ' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.949 --rc genhtml_branch_coverage=1 00:28:27.949 --rc genhtml_function_coverage=1 00:28:27.949 --rc genhtml_legend=1 00:28:27.949 --rc geninfo_all_blocks=1 00:28:27.949 --rc geninfo_unexecuted_blocks=1 00:28:27.949 00:28:27.949 ' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 
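The xtrace above is scripts/common.sh comparing the installed lcov version (1.15) against 2, component by component, after splitting on the dotted separators. The same comparison can be sketched standalone in a few lines of bash (the function name here is illustrative, not the harness's, and non-numeric components are ignored for brevity):

    # return success if dotted version $1 is strictly older than $2
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0    # earlier component decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"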
00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.949 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93800 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93800 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93800 ']' 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.950 03:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:27.950 [2024-11-29 03:15:43.819475] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:27.950 [2024-11-29 03:15:43.819624] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93800 ] 00:28:28.211 [2024-11-29 03:15:43.965449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.211 [2024-11-29 03:15:43.994354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:28.785 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:29.046 03:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:29.306 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:29.306 { 00:28:29.306 "name": "basen1", 00:28:29.306 "aliases": [ 00:28:29.306 "a3b17d79-fbfe-4e57-a268-aac873f96a7a" 00:28:29.306 ], 00:28:29.306 "product_name": "NVMe disk", 00:28:29.306 "block_size": 4096, 00:28:29.306 "num_blocks": 1310720, 00:28:29.306 "uuid": "a3b17d79-fbfe-4e57-a268-aac873f96a7a", 00:28:29.306 "numa_id": -1, 00:28:29.306 "assigned_rate_limits": { 00:28:29.306 "rw_ios_per_sec": 0, 00:28:29.306 "rw_mbytes_per_sec": 0, 00:28:29.306 "r_mbytes_per_sec": 0, 00:28:29.306 "w_mbytes_per_sec": 0 00:28:29.306 }, 00:28:29.306 "claimed": true, 00:28:29.306 "claim_type": "read_many_write_one", 00:28:29.306 "zoned": false, 00:28:29.306 "supported_io_types": { 00:28:29.306 "read": true, 00:28:29.306 "write": true, 00:28:29.306 "unmap": true, 00:28:29.306 "flush": true, 00:28:29.306 "reset": true, 00:28:29.306 "nvme_admin": true, 00:28:29.306 "nvme_io": true, 00:28:29.306 "nvme_io_md": false, 00:28:29.306 "write_zeroes": true, 00:28:29.306 "zcopy": false, 00:28:29.306 "get_zone_info": false, 00:28:29.306 "zone_management": false, 00:28:29.306 "zone_append": false, 00:28:29.306 "compare": true, 00:28:29.306 "compare_and_write": false, 00:28:29.306 "abort": true, 00:28:29.306 "seek_hole": false, 00:28:29.306 "seek_data": false, 00:28:29.306 "copy": true, 00:28:29.306 "nvme_iov_md": false 00:28:29.306 }, 00:28:29.306 "driver_specific": { 00:28:29.306 "nvme": [ 00:28:29.306 { 00:28:29.306 "pci_address": "0000:00:11.0", 00:28:29.306 "trid": { 00:28:29.306 "trtype": "PCIe", 00:28:29.306 "traddr": "0000:00:11.0" 00:28:29.306 }, 00:28:29.306 "ctrlr_data": { 00:28:29.306 "cntlid": 0, 00:28:29.306 "vendor_id": "0x1b36", 00:28:29.306 "model_number": "QEMU NVMe Ctrl", 00:28:29.306 "serial_number": "12341", 00:28:29.306 "firmware_revision": "8.0.0", 00:28:29.306 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:29.306 "oacs": { 00:28:29.306 "security": 0, 00:28:29.306 "format": 1, 00:28:29.306 "firmware": 0, 00:28:29.306 "ns_manage": 1 00:28:29.306 }, 00:28:29.306 "multi_ctrlr": false, 00:28:29.306 "ana_reporting": false 00:28:29.306 }, 00:28:29.306 "vs": { 00:28:29.306 "nvme_version": "1.4" 00:28:29.306 }, 00:28:29.306 "ns_data": { 00:28:29.306 "id": 1, 00:28:29.306 "can_share": false 00:28:29.306 } 00:28:29.306 } 00:28:29.306 ], 00:28:29.306 "mp_policy": "active_passive" 00:28:29.306 } 00:28:29.306 } 00:28:29.306 ]' 00:28:29.306 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- 
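The get_bdev_size helper traced above derives its MiB figure from the JSON that bdev_get_bdevs returns: block_size times num_blocks, divided down to MiB (4096 x 1310720 / 2^20 = 5120 for basen1). A minimal standalone sketch of the same arithmetic, assuming a running SPDK target and jq on the path (rpc.py path abbreviated):

    # size of a bdev in MiB, from block_size * num_blocks
    info=$(./scripts/rpc.py bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 1310720 in this run
    echo $(( bs * nb / 1024 / 1024 ))           # 5120 (MiB)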
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:29.307 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:29.568 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=379a72df-c722-4951-9168-45a9349d5d5c 00:28:29.568 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:29.568 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 379a72df-c722-4951-9168-45a9349d5d5c 00:28:29.828 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:30.090 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=43b0d910-b2b1-4230-9902-a0d6d7b3de83 00:28:30.090 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 43b0d910-b2b1-4230-9902-a0d6d7b3de83 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8a909a22-2bf0-4c76-9f59-51a5463eef3b 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8a909a22-2bf0-4c76-9f59-51a5463eef3b ]] 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8a909a22-2bf0-4c76-9f59-51a5463eef3b 5120 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8a909a22-2bf0-4c76-9f59-51a5463eef3b 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8a909a22-2bf0-4c76-9f59-51a5463eef3b 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8a909a22-2bf0-4c76-9f59-51a5463eef3b 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:30.351 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a909a22-2bf0-4c76-9f59-51a5463eef3b 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:30.612 { 00:28:30.612 "name": "8a909a22-2bf0-4c76-9f59-51a5463eef3b", 00:28:30.612 "aliases": [ 00:28:30.612 "lvs/basen1p0" 00:28:30.612 ], 00:28:30.612 "product_name": "Logical Volume", 00:28:30.612 "block_size": 4096, 00:28:30.612 "num_blocks": 5242880, 00:28:30.612 "uuid": "8a909a22-2bf0-4c76-9f59-51a5463eef3b", 00:28:30.612 "assigned_rate_limits": { 00:28:30.612 "rw_ios_per_sec": 0, 00:28:30.612 "rw_mbytes_per_sec": 0, 00:28:30.612 "r_mbytes_per_sec": 0, 00:28:30.612 "w_mbytes_per_sec": 0 00:28:30.612 }, 00:28:30.612 "claimed": false, 00:28:30.612 "zoned": false, 00:28:30.612 "supported_io_types": { 00:28:30.612 "read": true, 00:28:30.612 "write": true, 00:28:30.612 "unmap": true, 00:28:30.612 "flush": false, 00:28:30.612 "reset": true, 00:28:30.612 "nvme_admin": false, 00:28:30.612 "nvme_io": false, 00:28:30.612 "nvme_io_md": false, 00:28:30.612 "write_zeroes": 
true, 00:28:30.612 "zcopy": false, 00:28:30.612 "get_zone_info": false, 00:28:30.612 "zone_management": false, 00:28:30.612 "zone_append": false, 00:28:30.612 "compare": false, 00:28:30.612 "compare_and_write": false, 00:28:30.612 "abort": false, 00:28:30.612 "seek_hole": true, 00:28:30.612 "seek_data": true, 00:28:30.612 "copy": false, 00:28:30.612 "nvme_iov_md": false 00:28:30.612 }, 00:28:30.612 "driver_specific": { 00:28:30.612 "lvol": { 00:28:30.612 "lvol_store_uuid": "43b0d910-b2b1-4230-9902-a0d6d7b3de83", 00:28:30.612 "base_bdev": "basen1", 00:28:30.612 "thin_provision": true, 00:28:30.612 "num_allocated_clusters": 0, 00:28:30.612 "snapshot": false, 00:28:30.612 "clone": false, 00:28:30.612 "esnap_clone": false 00:28:30.612 } 00:28:30.612 } 00:28:30.612 } 00:28:30.612 ]' 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:30.612 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:30.613 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:30.613 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:30.874 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:30.874 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:30.874 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:31.135 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:31.135 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:31.135 03:15:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8a909a22-2bf0-4c76-9f59-51a5463eef3b -c cachen1p0 --l2p_dram_limit 2 00:28:31.135 [2024-11-29 03:15:47.084012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.135 [2024-11-29 03:15:47.084060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:31.135 [2024-11-29 03:15:47.084074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:31.135 [2024-11-29 03:15:47.084083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.135 [2024-11-29 03:15:47.084132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.135 [2024-11-29 03:15:47.084145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:31.135 [2024-11-29 03:15:47.084153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:31.135 [2024-11-29 03:15:47.084167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.135 [2024-11-29 03:15:47.084186] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:31.135 [2024-11-29 
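Taken together, the xtrace above amounts to a six-call RPC recipe for standing up an FTL bdev across two NVMe controllers: base device, lvstore, thin lvol, cache device, cache slice, then the FTL create itself. Condensed into one place for reference ($LVS_UUID and $LVOL_UUID stand in for the UUIDs the lvstore and lvol calls return in a given run; PCI addresses are as used here):

    ./scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # base device -> basen1
    ./scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs                            # prints the lvstore UUID
    ./scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u $LVS_UUID                # 20480 MiB thin-provisioned lvol
    ./scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache device -> cachen1
    ./scripts/rpc.py bdev_split_create cachen1 -s 5120 1                            # 5120 MiB slice -> cachen1p0
    ./scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d $LVOL_UUID -c cachen1p0 --l2p_dram_limit 2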
03:15:47.084418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:31.135 [2024-11-29 03:15:47.084432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.135 [2024-11-29 03:15:47.084444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:31.135 [2024-11-29 03:15:47.084452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:28:31.135 [2024-11-29 03:15:47.084461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.084490] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ad7a6e26-5f99-4564-b944-47558ba162fc 00:28:31.136 [2024-11-29 03:15:47.085550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.085579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:31.136 [2024-11-29 03:15:47.085596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:31.136 [2024-11-29 03:15:47.085604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.090775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.090806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:31.136 [2024-11-29 03:15:47.090818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.100 ms 00:28:31.136 [2024-11-29 03:15:47.090835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.090880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.090889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:31.136 [2024-11-29 03:15:47.090898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:31.136 [2024-11-29 03:15:47.090905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.090953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.090966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:31.136 [2024-11-29 03:15:47.090975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:31.136 [2024-11-29 03:15:47.090982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.091007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:31.136 [2024-11-29 03:15:47.092552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.092643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:31.136 [2024-11-29 03:15:47.092695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.554 ms 00:28:31.136 [2024-11-29 03:15:47.092720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.092762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.092785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:31.136 [2024-11-29 03:15:47.092805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:31.136 [2024-11-29 03:15:47.092842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.092946] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:31.136 [2024-11-29 03:15:47.093104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:31.136 [2024-11-29 03:15:47.093175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:31.136 [2024-11-29 03:15:47.093210] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:31.136 [2024-11-29 03:15:47.093242] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:31.136 [2024-11-29 03:15:47.093277] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:31.136 [2024-11-29 03:15:47.093341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:31.136 [2024-11-29 03:15:47.093371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:31.136 [2024-11-29 03:15:47.093394] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:31.136 [2024-11-29 03:15:47.093451] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:31.136 [2024-11-29 03:15:47.093470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.093546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:31.136 [2024-11-29 03:15:47.093573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:28:31.136 [2024-11-29 03:15:47.093594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.093692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.136 [2024-11-29 03:15:47.093779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:31.136 [2024-11-29 03:15:47.093802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:28:31.136 [2024-11-29 03:15:47.093859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.136 [2024-11-29 03:15:47.093982] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:31.136 [2024-11-29 03:15:47.094008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:31.136 [2024-11-29 03:15:47.094056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:31.136 [2024-11-29 03:15:47.094122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:31.136 [2024-11-29 03:15:47.094161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:31.136 [2024-11-29 03:15:47.094212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:31.136 [2024-11-29 03:15:47.094235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:31.136 [2024-11-29 03:15:47.094297] ftl_layout.c: 131:dump_region: *NOTICE*: 
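The layout numbers above are internally consistent: 3774873 L2P entries at the reported address size of 4 bytes come to about 14.40 MiB, which is why the l2p region is allotted 14.50 MiB. A quick check with shell arithmetic, illustrative only:

    # L2P table size = entries * address size, expressed in MiB
    echo "scale=2; 3774873 * 4 / 1048576" | bc    # ~14.40, rounded up to the 14.50 MiB region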
[FTL][ftl] offset: 14.75 MiB 00:28:31.136 [2024-11-29 03:15:47.094315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:31.136 [2024-11-29 03:15:47.094430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:31.136 [2024-11-29 03:15:47.094438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:31.136 [2024-11-29 03:15:47.094453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:31.136 [2024-11-29 03:15:47.094459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:31.136 [2024-11-29 03:15:47.094474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:31.136 [2024-11-29 03:15:47.094496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:31.136 [2024-11-29 03:15:47.094517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:31.136 [2024-11-29 03:15:47.094542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:31.136 [2024-11-29 03:15:47.094564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:31.136 [2024-11-29 03:15:47.094587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:31.136 [2024-11-29 03:15:47.094607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:31.136 [2024-11-29 03:15:47.094629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:31.136 [2024-11-29 03:15:47.094636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094643] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:31.136 [2024-11-29 03:15:47.094650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:31.136 [2024-11-29 03:15:47.094660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:31.136 [2024-11-29 03:15:47.094682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:31.136 [2024-11-29 03:15:47.094693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:31.136 [2024-11-29 03:15:47.094701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:31.136 [2024-11-29 03:15:47.094708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:31.136 [2024-11-29 03:15:47.094716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:31.136 [2024-11-29 03:15:47.094722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:31.136 [2024-11-29 03:15:47.094735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:31.136 [2024-11-29 03:15:47.094745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.136 [2024-11-29 03:15:47.094755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:31.136 [2024-11-29 03:15:47.094762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:31.136 [2024-11-29 03:15:47.094771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:31.136 [2024-11-29 03:15:47.094777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:31.136 [2024-11-29 03:15:47.094786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:31.137 [2024-11-29 03:15:47.094794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:31.137 [2024-11-29 03:15:47.094805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:31.137 [2024-11-29 03:15:47.094812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:31.137 [2024-11-29 03:15:47.094885] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:31.137 [2024-11-29 03:15:47.094893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.137 [2024-11-29 03:15:47.094909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:31.137 [2024-11-29 03:15:47.094918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:31.137 [2024-11-29 03:15:47.094925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:31.137 [2024-11-29 03:15:47.094935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.137 [2024-11-29 03:15:47.094946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:31.137 [2024-11-29 03:15:47.094958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:28:31.137 [2024-11-29 03:15:47.094965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.137 [2024-11-29 03:15:47.095022] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:28:31.137 [2024-11-29 03:15:47.095033] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:35.344 [2024-11-29 03:15:51.128336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.344 [2024-11-29 03:15:51.128608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:35.344 [2024-11-29 03:15:51.128773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4033.293 ms 00:28:35.344 [2024-11-29 03:15:51.128809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.344 [2024-11-29 03:15:51.141950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.344 [2024-11-29 03:15:51.142142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:35.344 [2024-11-29 03:15:51.142237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.977 ms 00:28:35.344 [2024-11-29 03:15:51.142268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.344 [2024-11-29 03:15:51.142363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.344 [2024-11-29 03:15:51.142468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:35.344 [2024-11-29 03:15:51.142502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:35.345 [2024-11-29 03:15:51.142522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.155178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.155354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:35.345 [2024-11-29 03:15:51.155511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.503 ms 00:28:35.345 [2024-11-29 03:15:51.155546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.155598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.155620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:35.345 [2024-11-29 03:15:51.155645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:35.345 [2024-11-29 03:15:51.155664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.156271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.156437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:35.345 [2024-11-29 03:15:51.156542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:28:35.345 [2024-11-29 03:15:51.156573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.156645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.156667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:35.345 [2024-11-29 03:15:51.156690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:35.345 [2024-11-29 03:15:51.156709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.165018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.165190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:35.345 [2024-11-29 03:15:51.165334] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.272 ms 00:28:35.345 [2024-11-29 03:15:51.165364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.185562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:35.345 [2024-11-29 03:15:51.187012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.187176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:35.345 [2024-11-29 03:15:51.187633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.552 ms 00:28:35.345 [2024-11-29 03:15:51.187693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.206714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.206878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:35.345 [2024-11-29 03:15:51.206914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.898 ms 00:28:35.345 [2024-11-29 03:15:51.206948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.207161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.207196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:35.345 [2024-11-29 03:15:51.207220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:28:35.345 [2024-11-29 03:15:51.207245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.213157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.213216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:35.345 [2024-11-29 03:15:51.213232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.863 ms 00:28:35.345 [2024-11-29 03:15:51.213243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.218399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.218455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:35.345 [2024-11-29 03:15:51.218466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.106 ms 00:28:35.345 [2024-11-29 03:15:51.218476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.218821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.218859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:35.345 [2024-11-29 03:15:51.218871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:28:35.345 [2024-11-29 03:15:51.218883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.260069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.260270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:35.345 [2024-11-29 03:15:51.260294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.162 ms 00:28:35.345 [2024-11-29 03:15:51.260305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.267148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:35.345 [2024-11-29 03:15:51.267321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:35.345 [2024-11-29 03:15:51.267340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.786 ms 00:28:35.345 [2024-11-29 03:15:51.267357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.273192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.273243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:35.345 [2024-11-29 03:15:51.273253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.792 ms 00:28:35.345 [2024-11-29 03:15:51.273263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.279009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.279062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:35.345 [2024-11-29 03:15:51.279073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.702 ms 00:28:35.345 [2024-11-29 03:15:51.279086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.279136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.279148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:35.345 [2024-11-29 03:15:51.279157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:35.345 [2024-11-29 03:15:51.279167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.279241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.345 [2024-11-29 03:15:51.279253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:35.345 [2024-11-29 03:15:51.279261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:35.345 [2024-11-29 03:15:51.279274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.345 [2024-11-29 03:15:51.280361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4195.877 ms, result 0 00:28:35.345 { 00:28:35.345 "name": "ftl", 00:28:35.345 "uuid": "ad7a6e26-5f99-4564-b944-47558ba162fc" 00:28:35.345 } 00:28:35.345 03:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:35.605 [2024-11-29 03:15:51.494570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.605 03:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:35.865 03:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:36.126 [2024-11-29 03:15:51.915071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:36.126 03:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:36.386 [2024-11-29 03:15:52.119553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:36.386 03:15:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:36.685 Fill FTL, iteration 1 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93923 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93923 /var/tmp/spdk.tgt.sock 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93923 ']' 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:36.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.685 03:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.685 [2024-11-29 03:15:52.573524] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:36.685 [2024-11-29 03:15:52.573993] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93923 ] 00:28:36.968 [2024-11-29 03:15:52.717252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.968 [2024-11-29 03:15:52.746098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.541 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.541 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:37.541 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:37.803 ftln1 00:28:37.803 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:37.803 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93923 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93923 ']' 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93923 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93923 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.066 killing process with pid 93923 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93923' 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93923 00:28:38.066 03:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93923 00:28:38.328 03:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:38.328 03:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:38.590 [2024-11-29 03:15:54.329166] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:38.590 [2024-11-29 03:15:54.329329] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93954 ] 00:28:38.590 [2024-11-29 03:15:54.476554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.590 [2024-11-29 03:15:54.505370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.978  [2024-11-29T03:15:56.907Z] Copying: 175/1024 [MB] (175 MBps) [2024-11-29T03:15:57.842Z] Copying: 406/1024 [MB] (231 MBps) [2024-11-29T03:15:58.778Z] Copying: 664/1024 [MB] (258 MBps) [2024-11-29T03:15:59.347Z] Copying: 913/1024 [MB] (249 MBps) [2024-11-29T03:15:59.347Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:28:43.355 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:43.355 Calculate MD5 checksum, iteration 1 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:43.355 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.615 [2024-11-29 03:15:59.383113] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:43.615 [2024-11-29 03:15:59.383361] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94010 ] 00:28:43.615 [2024-11-29 03:15:59.524138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.615 [2024-11-29 03:15:59.546298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.989  [2024-11-29T03:16:01.554Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-29T03:16:01.815Z] Copying: 1024/1024 [MB] (average 599 MBps) 00:28:45.823 00:28:45.823 03:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:45.823 03:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=75c1f5c7a17aca89aa0e14936e4b6744 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:47.737 Fill FTL, iteration 2 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.737 03:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.999 [2024-11-29 03:16:03.732314] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:47.999 [2024-11-29 03:16:03.732413] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94059 ] 00:28:47.999 [2024-11-29 03:16:03.876623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.999 [2024-11-29 03:16:03.896445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.388  [2024-11-29T03:16:06.315Z] Copying: 185/1024 [MB] (185 MBps) [2024-11-29T03:16:07.248Z] Copying: 431/1024 [MB] (246 MBps) [2024-11-29T03:16:08.181Z] Copying: 695/1024 [MB] (264 MBps) [2024-11-29T03:16:08.439Z] Copying: 962/1024 [MB] (267 MBps) [2024-11-29T03:16:08.698Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:28:52.706 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:52.706 Calculate MD5 checksum, iteration 2 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:52.706 03:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.706 [2024-11-29 03:16:08.545536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:52.706 [2024-11-29 03:16:08.545806] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94113 ] 00:28:52.706 [2024-11-29 03:16:08.688075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.964 [2024-11-29 03:16:08.704362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.339  [2024-11-29T03:16:10.591Z] Copying: 668/1024 [MB] (668 MBps) [2024-11-29T03:16:13.135Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:28:57.143 00:28:57.143 03:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:57.143 03:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4ffaa7638e45eef5ba8fcb7675040ce0 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:59.060 [2024-11-29 03:16:14.964143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.060 [2024-11-29 03:16:14.964199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:59.060 [2024-11-29 03:16:14.964212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:59.060 [2024-11-29 03:16:14.964222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.060 [2024-11-29 03:16:14.964240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.060 [2024-11-29 03:16:14.964247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:59.060 [2024-11-29 03:16:14.964254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:59.060 [2024-11-29 03:16:14.964260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.060 [2024-11-29 03:16:14.964276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.060 [2024-11-29 03:16:14.964285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:59.060 [2024-11-29 03:16:14.964292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.060 [2024-11-29 03:16:14.964301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.060 [2024-11-29 03:16:14.964356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:28:59.060 true 00:28:59.060 03:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:59.319 { 00:28:59.319 "name": "ftl", 00:28:59.319 "properties": [ 00:28:59.319 { 00:28:59.319 "name": "superblock_version", 00:28:59.319 "value": 5, 00:28:59.319 "read-only": true 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "name": "base_device", 00:28:59.319 "bands": [ 00:28:59.319 { 00:28:59.319 "id": 0, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 
00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 1, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 2, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 3, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 4, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 5, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 6, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 7, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 8, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 9, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 10, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 11, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 12, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 13, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 14, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 15, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 16, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 17, 00:28:59.319 "state": "FREE", 00:28:59.319 "validity": 0.0 00:28:59.319 } 00:28:59.319 ], 00:28:59.319 "read-only": true 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "name": "cache_device", 00:28:59.319 "type": "bdev", 00:28:59.319 "chunks": [ 00:28:59.319 { 00:28:59.319 "id": 0, 00:28:59.319 "state": "INACTIVE", 00:28:59.319 "utilization": 0.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 1, 00:28:59.319 "state": "CLOSED", 00:28:59.319 "utilization": 1.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 2, 00:28:59.319 "state": "CLOSED", 00:28:59.319 "utilization": 1.0 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 3, 00:28:59.319 "state": "OPEN", 00:28:59.319 "utilization": 0.001953125 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "id": 4, 00:28:59.319 "state": "OPEN", 00:28:59.319 "utilization": 0.0 00:28:59.319 } 00:28:59.319 ], 00:28:59.319 "read-only": true 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "name": "verbose_mode", 00:28:59.319 "value": true, 00:28:59.319 "unit": "", 00:28:59.319 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:59.319 }, 00:28:59.319 { 00:28:59.319 "name": "prep_upgrade_on_shutdown", 00:28:59.319 "value": false, 00:28:59.319 "unit": "", 00:28:59.319 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:59.319 } 00:28:59.319 ] 00:28:59.319 } 00:28:59.319 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:59.579 [2024-11-29 03:16:15.328426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:59.579 [2024-11-29 03:16:15.328459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:59.579 [2024-11-29 03:16:15.328467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:59.579 [2024-11-29 03:16:15.328473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.579 [2024-11-29 03:16:15.328490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.579 [2024-11-29 03:16:15.328496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:59.579 [2024-11-29 03:16:15.328502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.579 [2024-11-29 03:16:15.328508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.579 [2024-11-29 03:16:15.328523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.579 [2024-11-29 03:16:15.328529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:59.579 [2024-11-29 03:16:15.328536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.579 [2024-11-29 03:16:15.328541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.579 [2024-11-29 03:16:15.328589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:28:59.579 true 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:59.579 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:59.838 [2024-11-29 03:16:15.736769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.838 [2024-11-29 03:16:15.736798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:59.838 [2024-11-29 03:16:15.736806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:59.838 [2024-11-29 03:16:15.736812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.838 [2024-11-29 03:16:15.736839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.838 [2024-11-29 03:16:15.736845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:59.838 [2024-11-29 03:16:15.736851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.838 [2024-11-29 03:16:15.736856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.838 [2024-11-29 03:16:15.736871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.838 [2024-11-29 03:16:15.736877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:59.838 [2024-11-29 03:16:15.736883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:59.838 [2024-11-29 03:16:15.736888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:59.838 [2024-11-29 03:16:15.736929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:28:59.838 true 00:28:59.839 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.100 { 00:29:00.100 "name": "ftl", 00:29:00.100 "properties": [ 00:29:00.100 { 00:29:00.100 "name": "superblock_version", 00:29:00.100 "value": 5, 00:29:00.100 "read-only": true 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "name": "base_device", 00:29:00.100 "bands": [ 00:29:00.100 { 00:29:00.100 "id": 0, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 1, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 2, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 3, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 4, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 5, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 6, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 7, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 8, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 9, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 10, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 11, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 12, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 13, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 14, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 15, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 16, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 17, 00:29:00.100 "state": "FREE", 00:29:00.100 "validity": 0.0 00:29:00.100 } 00:29:00.100 ], 00:29:00.100 "read-only": true 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "name": "cache_device", 00:29:00.100 "type": "bdev", 00:29:00.100 "chunks": [ 00:29:00.100 { 00:29:00.100 "id": 0, 00:29:00.100 "state": "INACTIVE", 00:29:00.100 "utilization": 0.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 1, 00:29:00.100 "state": "CLOSED", 00:29:00.100 "utilization": 1.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 2, 00:29:00.100 "state": "CLOSED", 00:29:00.100 "utilization": 1.0 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 3, 00:29:00.100 "state": "OPEN", 00:29:00.100 "utilization": 0.001953125 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "id": 4, 00:29:00.100 "state": "OPEN", 00:29:00.100 "utilization": 0.0 00:29:00.100 } 00:29:00.100 ], 00:29:00.100 "read-only": true 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "name": "verbose_mode", 
00:29:00.100 "value": true, 00:29:00.100 "unit": "", 00:29:00.100 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:00.100 }, 00:29:00.100 { 00:29:00.100 "name": "prep_upgrade_on_shutdown", 00:29:00.100 "value": true, 00:29:00.100 "unit": "", 00:29:00.100 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:00.100 } 00:29:00.100 ] 00:29:00.100 } 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93800 ]] 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93800 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93800 ']' 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93800 00:29:00.100 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93800 00:29:00.101 killing process with pid 93800 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93800' 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93800 00:29:00.101 03:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93800 00:29:00.101 [2024-11-29 03:16:16.059594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:00.101 [2024-11-29 03:16:16.066167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.101 [2024-11-29 03:16:16.066200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:00.101 [2024-11-29 03:16:16.066211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:00.101 [2024-11-29 03:16:16.066219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.101 [2024-11-29 03:16:16.066238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:00.101 [2024-11-29 03:16:16.066760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.101 [2024-11-29 03:16:16.066783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:00.101 [2024-11-29 03:16:16.066792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:29:00.101 [2024-11-29 03:16:16.066799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.250 [2024-11-29 03:16:24.003433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.250 [2024-11-29 03:16:24.003497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:08.250 [2024-11-29 03:16:24.003511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7936.589 ms 00:29:08.250 [2024-11-29 03:16:24.003518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.250 [2024-11-29 03:16:24.005044] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.005070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:08.251 [2024-11-29 03:16:24.005078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.512 ms 00:29:08.251 [2024-11-29 03:16:24.005085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.005965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.005987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:08.251 [2024-11-29 03:16:24.005995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms 00:29:08.251 [2024-11-29 03:16:24.006002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.008510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.008546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:08.251 [2024-11-29 03:16:24.008554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.477 ms 00:29:08.251 [2024-11-29 03:16:24.008560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.011530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.011560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:08.251 [2024-11-29 03:16:24.011568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.944 ms 00:29:08.251 [2024-11-29 03:16:24.011579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.011642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.011650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:08.251 [2024-11-29 03:16:24.011657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:08.251 [2024-11-29 03:16:24.011664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.013487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.013514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:08.251 [2024-11-29 03:16:24.013522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.810 ms 00:29:08.251 [2024-11-29 03:16:24.013527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.015552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.015577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:08.251 [2024-11-29 03:16:24.015584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.001 ms 00:29:08.251 [2024-11-29 03:16:24.015589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.017209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.017346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:08.251 [2024-11-29 03:16:24.017357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.595 ms 00:29:08.251 [2024-11-29 03:16:24.017363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.019911] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.020022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:08.251 [2024-11-29 03:16:24.020055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.490 ms 00:29:08.251 [2024-11-29 03:16:24.020077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.020161] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:08.251 [2024-11-29 03:16:24.020223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:08.251 [2024-11-29 03:16:24.020254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:08.251 [2024-11-29 03:16:24.020279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:08.251 [2024-11-29 03:16:24.020304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:08.251 [2024-11-29 03:16:24.020666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:08.251 [2024-11-29 03:16:24.020690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ad7a6e26-5f99-4564-b944-47558ba162fc 00:29:08.251 [2024-11-29 03:16:24.020713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:08.251 [2024-11-29 03:16:24.020744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:08.251 [2024-11-29 03:16:24.020765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:08.251 [2024-11-29 03:16:24.020788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:08.251 [2024-11-29 03:16:24.020809] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:08.251 [2024-11-29 03:16:24.021173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:08.251 [2024-11-29 03:16:24.021280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:08.251 [2024-11-29 03:16:24.021342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:08.251 [2024-11-29 03:16:24.021567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:08.251 [2024-11-29 03:16:24.021641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.021703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:08.251 [2024-11-29 03:16:24.021767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.481 ms 00:29:08.251 [2024-11-29 03:16:24.023955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.026995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.027073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:08.251 [2024-11-29 03:16:24.027104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.849 ms 00:29:08.251 [2024-11-29 03:16:24.027127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.027360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.251 [2024-11-29 03:16:24.027402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:08.251 [2024-11-29 03:16:24.027428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.125 ms 00:29:08.251 [2024-11-29 03:16:24.027448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.036103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.036143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:08.251 [2024-11-29 03:16:24.036153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.036160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.036191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.036199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:08.251 [2024-11-29 03:16:24.036206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.036214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.036274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.036284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:08.251 [2024-11-29 03:16:24.036292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.036299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.036316] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.036324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:08.251 [2024-11-29 03:16:24.036334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.036341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.045589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.045625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:08.251 [2024-11-29 03:16:24.045635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.045643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.053117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.053266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:08.251 [2024-11-29 03:16:24.053282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.053290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.053340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.251 [2024-11-29 03:16:24.053352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:08.251 [2024-11-29 03:16:24.053360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.251 [2024-11-29 03:16:24.053367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.251 [2024-11-29 03:16:24.053415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.252 [2024-11-29 03:16:24.053424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:08.252 [2024-11-29 03:16:24.053432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.252 [2024-11-29 03:16:24.053439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.252 [2024-11-29 03:16:24.053505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.252 [2024-11-29 03:16:24.053514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:08.252 [2024-11-29 03:16:24.053525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.252 [2024-11-29 03:16:24.053533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.252 [2024-11-29 03:16:24.053561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.252 [2024-11-29 03:16:24.053570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:08.252 [2024-11-29 03:16:24.053577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.252 [2024-11-29 03:16:24.053585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.252 [2024-11-29 03:16:24.053626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.252 [2024-11-29 03:16:24.053635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:08.252 [2024-11-29 03:16:24.053645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.252 [2024-11-29 03:16:24.053652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.252 
[2024-11-29 03:16:24.053695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:08.252 [2024-11-29 03:16:24.053705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:08.252 [2024-11-29 03:16:24.053713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:08.252 [2024-11-29 03:16:24.053720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.252 [2024-11-29 03:16:24.053865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7987.619 ms, result 0 00:29:13.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94301 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94301 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94301 ']' 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.566 03:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:13.566 [2024-11-29 03:16:29.297253] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
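The clean-shutdown trace above (persist L2P, NV cache, valid map, band info, trim metadata, superblock, then "Set FTL clean state") ends with a statistics dump reporting WAF: 1.5006. That figure is consistent with total writes divided by user writes from the same dump; a minimal standalone sketch (not part of the test scripts) reproducing it from the dumped counters:

    # counters copied from the ftl_dev_dump_stats lines above
    total_writes=786752
    user_writes=524288
    # WAF = total media writes / user-initiated writes
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints: WAF: 1.5006

Everything the FTL writes beyond the user's own blocks (metadata persistence, relocation) inflates this ratio above 1.0, which is what the dump is measuring.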
00:29:13.566 [2024-11-29 03:16:29.297591] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94301 ] 00:29:13.566 [2024-11-29 03:16:29.445222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.566 [2024-11-29 03:16:29.475184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.140 [2024-11-29 03:16:29.818578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:14.140 [2024-11-29 03:16:29.818677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:14.140 [2024-11-29 03:16:29.972219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.972286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:14.140 [2024-11-29 03:16:29.972309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.140 [2024-11-29 03:16:29.972317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.972379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.972392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:14.140 [2024-11-29 03:16:29.972401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:14.140 [2024-11-29 03:16:29.972409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.972433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:14.140 [2024-11-29 03:16:29.972708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:14.140 [2024-11-29 03:16:29.972724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.972736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:14.140 [2024-11-29 03:16:29.972745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:29:14.140 [2024-11-29 03:16:29.972753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.974579] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:14.140 [2024-11-29 03:16:29.978540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.978756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:14.140 [2024-11-29 03:16:29.978778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.962 ms 00:29:14.140 [2024-11-29 03:16:29.978786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.978980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.979012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:14.140 [2024-11-29 03:16:29.979024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:14.140 [2024-11-29 03:16:29.979032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.987374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 
03:16:29.987421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:14.140 [2024-11-29 03:16:29.987432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.269 ms 00:29:14.140 [2024-11-29 03:16:29.987440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.987493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.987502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:14.140 [2024-11-29 03:16:29.987510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:14.140 [2024-11-29 03:16:29.987522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.987590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.987604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:14.140 [2024-11-29 03:16:29.987613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:14.140 [2024-11-29 03:16:29.987623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.987651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:14.140 [2024-11-29 03:16:29.989627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.989804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:14.140 [2024-11-29 03:16:29.989846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.982 ms 00:29:14.140 [2024-11-29 03:16:29.989854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.989902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.989911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:14.140 [2024-11-29 03:16:29.989920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:14.140 [2024-11-29 03:16:29.989928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.989962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:14.140 [2024-11-29 03:16:29.989983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:14.140 [2024-11-29 03:16:29.990020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:14.140 [2024-11-29 03:16:29.990043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:14.140 [2024-11-29 03:16:29.990152] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:14.140 [2024-11-29 03:16:29.990163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:14.140 [2024-11-29 03:16:29.990174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:14.140 [2024-11-29 03:16:29.990184] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:14.140 [2024-11-29 03:16:29.990193] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:14.140 [2024-11-29 03:16:29.990202] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:14.140 [2024-11-29 03:16:29.990209] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:14.140 [2024-11-29 03:16:29.990217] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:14.140 [2024-11-29 03:16:29.990224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:14.140 [2024-11-29 03:16:29.990232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.990242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:14.140 [2024-11-29 03:16:29.990251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:29:14.140 [2024-11-29 03:16:29.990258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.990343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.140 [2024-11-29 03:16:29.990352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:14.140 [2024-11-29 03:16:29.990364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:14.140 [2024-11-29 03:16:29.990372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.140 [2024-11-29 03:16:29.990474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:14.141 [2024-11-29 03:16:29.990485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:14.141 [2024-11-29 03:16:29.990497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:14.141 [2024-11-29 03:16:29.990523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:14.141 [2024-11-29 03:16:29.990539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:14.141 [2024-11-29 03:16:29.990549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:14.141 [2024-11-29 03:16:29.990557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:14.141 [2024-11-29 03:16:29.990573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:14.141 [2024-11-29 03:16:29.990580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:14.141 [2024-11-29 03:16:29.990604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:14.141 [2024-11-29 03:16:29.990612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:14.141 [2024-11-29 03:16:29.990628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:14.141 [2024-11-29 03:16:29.990636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990644] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:14.141 [2024-11-29 03:16:29.990652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:14.141 [2024-11-29 03:16:29.990674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:14.141 [2024-11-29 03:16:29.990695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:14.141 [2024-11-29 03:16:29.990716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:14.141 [2024-11-29 03:16:29.990739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:14.141 [2024-11-29 03:16:29.990757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:14.141 [2024-11-29 03:16:29.990777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:14.141 [2024-11-29 03:16:29.990796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:14.141 [2024-11-29 03:16:29.990802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990808] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:14.141 [2024-11-29 03:16:29.990820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:14.141 [2024-11-29 03:16:29.990844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:14.141 [2024-11-29 03:16:29.990864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:14.141 [2024-11-29 03:16:29.990872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:14.141 [2024-11-29 03:16:29.990878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:14.141 [2024-11-29 03:16:29.990886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:14.141 [2024-11-29 03:16:29.990893] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:14.141 [2024-11-29 03:16:29.990900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:14.141 [2024-11-29 03:16:29.990908] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:14.141 [2024-11-29 03:16:29.990918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.990927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:14.141 [2024-11-29 03:16:29.990935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.990942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.990950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:14.141 [2024-11-29 03:16:29.990957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:14.141 [2024-11-29 03:16:29.990965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:14.141 [2024-11-29 03:16:29.990984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:14.141 [2024-11-29 03:16:29.990997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:14.141 [2024-11-29 03:16:29.991046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:14.141 [2024-11-29 03:16:29.991055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.141 [2024-11-29 03:16:29.991070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:14.141 [2024-11-29 03:16:29.991076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:14.141 [2024-11-29 03:16:29.991090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:14.141 [2024-11-29 03:16:29.991099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.141 [2024-11-29 03:16:29.991108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:14.141 [2024-11-29 03:16:29.991120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.692 ms 00:29:14.141 [2024-11-29 03:16:29.991131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.141 [2024-11-29 03:16:29.991173] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:14.141 [2024-11-29 03:16:29.991183] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:18.358 [2024-11-29 03:16:33.504362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.504799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:18.358 [2024-11-29 03:16:33.504877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3513.174 ms 00:29:18.358 [2024-11-29 03:16:33.504936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.516880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.516934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:18.358 [2024-11-29 03:16:33.516947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.795 ms 00:29:18.358 [2024-11-29 03:16:33.516959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.517009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.517021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:18.358 [2024-11-29 03:16:33.517030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:18.358 [2024-11-29 03:16:33.517041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.527772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.528004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:18.358 [2024-11-29 03:16:33.528024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.676 ms 00:29:18.358 [2024-11-29 03:16:33.528033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.528082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.528091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:18.358 [2024-11-29 03:16:33.528105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:18.358 [2024-11-29 03:16:33.528112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.528583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.528605] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:18.358 [2024-11-29 03:16:33.528624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:29:18.358 [2024-11-29 03:16:33.528634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.528679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.528691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:18.358 [2024-11-29 03:16:33.528701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:18.358 [2024-11-29 03:16:33.528713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.536032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.536071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:18.358 [2024-11-29 03:16:33.536091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.294 ms 00:29:18.358 [2024-11-29 03:16:33.536099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.548302] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:18.358 [2024-11-29 03:16:33.548545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:18.358 [2024-11-29 03:16:33.548583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.548597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:18.358 [2024-11-29 03:16:33.548613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.392 ms 00:29:18.358 [2024-11-29 03:16:33.548625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.559772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.559932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:18.358 [2024-11-29 03:16:33.559972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.094 ms 00:29:18.358 [2024-11-29 03:16:33.559995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.563680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.563913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:18.358 [2024-11-29 03:16:33.563935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.538 ms 00:29:18.358 [2024-11-29 03:16:33.563944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.566587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.566641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:18.358 [2024-11-29 03:16:33.566654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.604 ms 00:29:18.358 [2024-11-29 03:16:33.566663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.567135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.567162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:18.358 [2024-11-29 
03:16:33.567174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:29:18.358 [2024-11-29 03:16:33.567182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.598388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.598620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:18.358 [2024-11-29 03:16:33.598642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.176 ms 00:29:18.358 [2024-11-29 03:16:33.598653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.607459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:18.358 [2024-11-29 03:16:33.608702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.608747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:18.358 [2024-11-29 03:16:33.608760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.902 ms 00:29:18.358 [2024-11-29 03:16:33.608769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.358 [2024-11-29 03:16:33.608880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.358 [2024-11-29 03:16:33.608894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:18.358 [2024-11-29 03:16:33.608912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:18.359 [2024-11-29 03:16:33.608925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.608997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.609009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:18.359 [2024-11-29 03:16:33.609050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:18.359 [2024-11-29 03:16:33.609060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.609090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.609102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:18.359 [2024-11-29 03:16:33.609111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:18.359 [2024-11-29 03:16:33.609125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.609170] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:18.359 [2024-11-29 03:16:33.609183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.609193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:18.359 [2024-11-29 03:16:33.609204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:18.359 [2024-11-29 03:16:33.609213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.614592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.614647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:18.359 [2024-11-29 03:16:33.614673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.351 ms 00:29:18.359 [2024-11-29 03:16:33.614683] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.614774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.614786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:18.359 [2024-11-29 03:16:33.614797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:18.359 [2024-11-29 03:16:33.614811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.616283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3643.472 ms, result 0 00:29:18.359 [2024-11-29 03:16:33.629248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.359 [2024-11-29 03:16:33.645301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:18.359 [2024-11-29 03:16:33.653434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:18.359 [2024-11-29 03:16:33.885416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.885475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:18.359 [2024-11-29 03:16:33.885488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:18.359 [2024-11-29 03:16:33.885498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.885524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.885534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:18.359 [2024-11-29 03:16:33.885546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:18.359 [2024-11-29 03:16:33.885554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.885577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.359 [2024-11-29 03:16:33.885586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:18.359 [2024-11-29 03:16:33.885595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:18.359 [2024-11-29 03:16:33.885603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.359 [2024-11-29 03:16:33.885661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:29:18.359 true 00:29:18.359 03:16:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.359 { 00:29:18.359 "name": "ftl", 00:29:18.359 "properties": [ 00:29:18.359 { 00:29:18.359 "name": "superblock_version", 00:29:18.359 "value": 5, 00:29:18.359 "read-only": true 00:29:18.359 }, 00:29:18.359 { 
00:29:18.359 "name": "base_device", 00:29:18.359 "bands": [ 00:29:18.359 { 00:29:18.359 "id": 0, 00:29:18.359 "state": "CLOSED", 00:29:18.359 "validity": 1.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 1, 00:29:18.359 "state": "CLOSED", 00:29:18.359 "validity": 1.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 2, 00:29:18.359 "state": "CLOSED", 00:29:18.359 "validity": 0.007843137254901933 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 3, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 4, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 5, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 6, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 7, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 8, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 9, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 10, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 11, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 12, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 13, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 14, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 15, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 16, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 }, 00:29:18.359 { 00:29:18.359 "id": 17, 00:29:18.359 "state": "FREE", 00:29:18.359 "validity": 0.0 00:29:18.359 } 00:29:18.360 ], 00:29:18.360 "read-only": true 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "name": "cache_device", 00:29:18.360 "type": "bdev", 00:29:18.360 "chunks": [ 00:29:18.360 { 00:29:18.360 "id": 0, 00:29:18.360 "state": "INACTIVE", 00:29:18.360 "utilization": 0.0 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "id": 1, 00:29:18.360 "state": "OPEN", 00:29:18.360 "utilization": 0.0 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "id": 2, 00:29:18.360 "state": "OPEN", 00:29:18.360 "utilization": 0.0 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "id": 3, 00:29:18.360 "state": "FREE", 00:29:18.360 "utilization": 0.0 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "id": 4, 00:29:18.360 "state": "FREE", 00:29:18.360 "utilization": 0.0 00:29:18.360 } 00:29:18.360 ], 00:29:18.360 "read-only": true 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "name": "verbose_mode", 00:29:18.360 "value": true, 00:29:18.360 "unit": "", 00:29:18.360 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:18.360 }, 00:29:18.360 { 00:29:18.360 "name": "prep_upgrade_on_shutdown", 00:29:18.360 "value": false, 00:29:18.360 "unit": "", 00:29:18.360 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:18.360 } 00:29:18.360 ] 00:29:18.360 } 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:18.360 03:16:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:18.360 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:18.621 Validate MD5 checksum, iteration 1 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:18.621 03:16:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:18.883 [2024-11-29 03:16:34.615488] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
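The two jq pipelines above count cache chunks with non-zero utilization and bands left OPENED before the dirty-shutdown phase; against the property dump shown earlier both come out 0 (used=0, opened=0). A hedged standalone re-run of the first filter, using only the RPC and filter already visible in the log:

    # query FTL properties over the target's RPC socket and count used chunks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[]
             | select(.name == "cache_device")
             | .chunks[]
             | select(.utilization != 0.0)] | length'
    # with the dump above (every chunk at utilization 0.0) this prints 0,
    # so the "[[ 0 -ne 0 ]]" guard is false and the script proceeds
    # straight to checksum validation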
00:29:18.883 [2024-11-29 03:16:34.615604] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94372 ] 00:29:18.883 [2024-11-29 03:16:34.760321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.883 [2024-11-29 03:16:34.778919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.267  [2024-11-29T03:16:37.197Z] Copying: 629/1024 [MB] (629 MBps) [2024-11-29T03:16:37.766Z] Copying: 1024/1024 [MB] (average 577 MBps) 00:29:21.774 00:29:21.774 03:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:21.774 03:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:23.689 Validate MD5 checksum, iteration 2 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=75c1f5c7a17aca89aa0e14936e4b6744 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 75c1f5c7a17aca89aa0e14936e4b6744 != \7\5\c\1\f\5\c\7\a\1\7\a\c\a\8\9\a\a\0\e\1\4\9\3\6\e\4\b\6\7\4\4 ]] 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:23.689 03:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:23.689 [2024-11-29 03:16:39.667505] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
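Each "Validate MD5 checksum" iteration above reads a 1 GiB slice of the exported ftln1 bdev with spdk_dd over the NVMe/TCP initiator config, hashes the output file, and compares the digest against the checksum recorded for that slice. A condensed sketch of one iteration, assuming the slice's reference digest sits in a hypothetical $expected_sum; the real invocation in the log additionally passes --cpumask, --rpc-socket, and --json for the initiator setup:

    # pull 1024 x 1 MiB blocks from ftln1, starting 1024 MiB in (iteration 2)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftln1 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=1024
    # hash the slice and compare with the digest recorded for the same range
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum == "$expected_sum" ]] || { echo "MD5 mismatch at skip=1024"; exit 1; }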
00:29:23.689 [2024-11-29 03:16:39.667686] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94428 ] 00:29:23.949 [2024-11-29 03:16:39.812251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.949 [2024-11-29 03:16:39.833787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.324  [2024-11-29T03:16:41.575Z] Copying: 786/1024 [MB] (786 MBps) [2024-11-29T03:16:44.869Z] Copying: 1024/1024 [MB] (average 755 MBps) 00:29:28.877 00:29:28.877 03:16:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:28.878 03:16:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ffaa7638e45eef5ba8fcb7675040ce0 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ffaa7638e45eef5ba8fcb7675040ce0 != \4\f\f\a\a\7\6\3\8\e\4\5\e\e\f\5\b\a\8\f\c\b\7\6\7\5\0\4\0\c\e\0 ]] 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94301 ]] 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94301 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94500 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94500 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94500 ']' 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
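tcp_target_shutdown_dirty above kills the target holding the live FTL instance with SIGKILL, so none of the clean-shutdown persistence seen earlier gets a chance to run; the fresh target launched from the same JSON config then has to bring FTL up from its dirty state. A rough sketch of that sequence, assuming the target is backgrounded and its PID tracked the way the xtrace suggests:

    # SIGKILL the target (pid 94301 above) so FTL cannot write a clean state
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # relaunch from the same config; FTL must come up via the recovery path
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!   # becomes 94500 in the log that follows
    # the suite then blocks on waitforlisten until the new target answers
    # on /var/tmp/spdk.sock before issuing further RPCs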
00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.252 03:16:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:30.252 [2024-11-29 03:16:46.221638] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:30.252 [2024-11-29 03:16:46.221754] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94500 ] 00:29:30.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94301 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:30.511 [2024-11-29 03:16:46.363086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.511 [2024-11-29 03:16:46.379519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.770 [2024-11-29 03:16:46.632366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:30.770 [2024-11-29 03:16:46.632414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:31.033 [2024-11-29 03:16:46.770389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.770436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:31.033 [2024-11-29 03:16:46.770455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:31.033 [2024-11-29 03:16:46.770466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.033 [2024-11-29 03:16:46.770537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.770554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:31.033 [2024-11-29 03:16:46.770565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:31.033 [2024-11-29 03:16:46.770575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.033 [2024-11-29 03:16:46.770604] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:31.033 [2024-11-29 03:16:46.770939] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:31.033 [2024-11-29 03:16:46.770961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.770972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:31.033 [2024-11-29 03:16:46.770983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:29:31.033 [2024-11-29 03:16:46.770993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.033 [2024-11-29 03:16:46.771316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:31.033 [2024-11-29 03:16:46.775224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.775258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:31.033 [2024-11-29 03:16:46.775272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.909 ms 
00:29:31.033 [2024-11-29 03:16:46.775279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.033 [2024-11-29 03:16:46.776196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.776223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:31.033 [2024-11-29 03:16:46.776233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:31.033 [2024-11-29 03:16:46.776241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.033 [2024-11-29 03:16:46.776494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.033 [2024-11-29 03:16:46.776505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:31.034 [2024-11-29 03:16:46.776513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:29:31.034 [2024-11-29 03:16:46.776521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.776553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.776565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:31.034 [2024-11-29 03:16:46.776573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:31.034 [2024-11-29 03:16:46.776583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.776614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.776626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:31.034 [2024-11-29 03:16:46.776633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:31.034 [2024-11-29 03:16:46.776643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.776665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:31.034 [2024-11-29 03:16:46.777509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.777525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:31.034 [2024-11-29 03:16:46.777533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:29:31.034 [2024-11-29 03:16:46.777540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.777568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.777581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:31.034 [2024-11-29 03:16:46.777590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:31.034 [2024-11-29 03:16:46.777597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.777626] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:31.034 [2024-11-29 03:16:46.777643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:31.034 [2024-11-29 03:16:46.777676] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:31.034 [2024-11-29 03:16:46.777691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:31.034 [2024-11-29 
03:16:46.777795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:31.034 [2024-11-29 03:16:46.777809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:31.034 [2024-11-29 03:16:46.777820] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:31.034 [2024-11-29 03:16:46.777848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:31.034 [2024-11-29 03:16:46.777856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:31.034 [2024-11-29 03:16:46.777864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:31.034 [2024-11-29 03:16:46.777872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:31.034 [2024-11-29 03:16:46.777879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:31.034 [2024-11-29 03:16:46.777898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:31.034 [2024-11-29 03:16:46.777906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.777916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:31.034 [2024-11-29 03:16:46.777924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:29:31.034 [2024-11-29 03:16:46.777932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.778016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.034 [2024-11-29 03:16:46.778024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:31.034 [2024-11-29 03:16:46.778037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:29:31.034 [2024-11-29 03:16:46.778044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.034 [2024-11-29 03:16:46.778143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:31.034 [2024-11-29 03:16:46.778153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:31.034 [2024-11-29 03:16:46.778162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:31.034 [2024-11-29 03:16:46.778193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:31.034 [2024-11-29 03:16:46.778210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:31.034 [2024-11-29 03:16:46.778218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:31.034 [2024-11-29 03:16:46.778226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:31.034 [2024-11-29 03:16:46.778243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:31.034 [2024-11-29 03:16:46.778250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 
03:16:46.778257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:31.034 [2024-11-29 03:16:46.778268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:31.034 [2024-11-29 03:16:46.778276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:31.034 [2024-11-29 03:16:46.778291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:31.034 [2024-11-29 03:16:46.778298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:31.034 [2024-11-29 03:16:46.778314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:31.034 [2024-11-29 03:16:46.778336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:31.034 [2024-11-29 03:16:46.778358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:31.034 [2024-11-29 03:16:46.778380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:31.034 [2024-11-29 03:16:46.778404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:31.034 [2024-11-29 03:16:46.778426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:31.034 [2024-11-29 03:16:46.778448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:31.034 [2024-11-29 03:16:46.778470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:31.034 [2024-11-29 03:16:46.778479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778486] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:31.034 [2024-11-29 03:16:46.778495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:31.034 
[2024-11-29 03:16:46.778502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:31.034 [2024-11-29 03:16:46.778524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:31.034 [2024-11-29 03:16:46.778532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:31.034 [2024-11-29 03:16:46.778538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:31.034 [2024-11-29 03:16:46.778544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:31.034 [2024-11-29 03:16:46.778551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:31.034 [2024-11-29 03:16:46.778557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:31.034 [2024-11-29 03:16:46.778565] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:31.034 [2024-11-29 03:16:46.778574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.034 [2024-11-29 03:16:46.778582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:31.034 [2024-11-29 03:16:46.778589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:31.034 [2024-11-29 03:16:46.778597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:31.034 [2024-11-29 03:16:46.778604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:31.034 [2024-11-29 03:16:46.778610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:31.034 [2024-11-29 03:16:46.778617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:31.034 [2024-11-29 03:16:46.778625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:31.034 [2024-11-29 03:16:46.778634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:31.035 [2024-11-29 03:16:46.778682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:31.035 [2024-11-29 03:16:46.778690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:31.035 [2024-11-29 03:16:46.778709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:31.035 [2024-11-29 03:16:46.778716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:31.035 [2024-11-29 03:16:46.778724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:31.035 [2024-11-29 03:16:46.778732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.778741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:31.035 [2024-11-29 03:16:46.778747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.657 ms 00:29:31.035 [2024-11-29 03:16:46.778756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.785469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.785498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:31.035 [2024-11-29 03:16:46.785510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.654 ms 00:29:31.035 [2024-11-29 03:16:46.785518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.785552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.785560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:31.035 [2024-11-29 03:16:46.785570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:31.035 [2024-11-29 03:16:46.785577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.794233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.794264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:31.035 [2024-11-29 03:16:46.794273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.616 ms 00:29:31.035 [2024-11-29 03:16:46.794284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.794308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.794319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:31.035 [2024-11-29 03:16:46.794328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:31.035 [2024-11-29 03:16:46.794337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.794413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.794433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:29:31.035 [2024-11-29 03:16:46.794442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:31.035 [2024-11-29 03:16:46.794452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.794489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.794498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:31.035 [2024-11-29 03:16:46.794507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:31.035 [2024-11-29 03:16:46.794516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.800142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.800173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:31.035 [2024-11-29 03:16:46.800184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.602 ms 00:29:31.035 [2024-11-29 03:16:46.800193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.800273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.800288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:31.035 [2024-11-29 03:16:46.800300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:31.035 [2024-11-29 03:16:46.800311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.818750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.818809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:31.035 [2024-11-29 03:16:46.818852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.418 ms 00:29:31.035 [2024-11-29 03:16:46.818865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.820470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.820506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:31.035 [2024-11-29 03:16:46.820525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:29:31.035 [2024-11-29 03:16:46.820537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.836240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.836279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:31.035 [2024-11-29 03:16:46.836290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.650 ms 00:29:31.035 [2024-11-29 03:16:46.836302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.836421] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:31.035 [2024-11-29 03:16:46.836503] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:31.035 [2024-11-29 03:16:46.836582] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:31.035 [2024-11-29 03:16:46.836662] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:31.035 [2024-11-29 03:16:46.836677] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.836686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:31.035 [2024-11-29 03:16:46.836701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:29:31.035 [2024-11-29 03:16:46.836709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.836758] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:31.035 [2024-11-29 03:16:46.836769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.836777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:31.035 [2024-11-29 03:16:46.836785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:31.035 [2024-11-29 03:16:46.836793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.839738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.839773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:31.035 [2024-11-29 03:16:46.839784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.924 ms 00:29:31.035 [2024-11-29 03:16:46.839794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.840604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.840648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:31.035 [2024-11-29 03:16:46.840660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:31.035 [2024-11-29 03:16:46.840669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.035 [2024-11-29 03:16:46.840742] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:31.035 [2024-11-29 03:16:46.840901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.035 [2024-11-29 03:16:46.840914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:31.035 [2024-11-29 03:16:46.840927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:29:31.035 [2024-11-29 03:16:46.840934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.609 [2024-11-29 03:16:47.504413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.609 [2024-11-29 03:16:47.504683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:31.609 [2024-11-29 03:16:47.504712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 663.170 ms 00:29:31.609 [2024-11-29 03:16:47.504724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.609 [2024-11-29 03:16:47.506626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.609 [2024-11-29 03:16:47.506694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:31.609 [2024-11-29 03:16:47.506707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:29:31.609 [2024-11-29 03:16:47.506717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.609 [2024-11-29 03:16:47.507480] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:29:31.609 [2024-11-29 03:16:47.507535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.609 [2024-11-29 03:16:47.507546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:31.609 [2024-11-29 03:16:47.507558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:29:31.609 [2024-11-29 03:16:47.507568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.609 [2024-11-29 03:16:47.507609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.609 [2024-11-29 03:16:47.507629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:31.609 [2024-11-29 03:16:47.507639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:31.609 [2024-11-29 03:16:47.507649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.609 [2024-11-29 03:16:47.507686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 666.939 ms, result 0 00:29:31.609 [2024-11-29 03:16:47.507743] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:31.609 [2024-11-29 03:16:47.508055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.609 [2024-11-29 03:16:47.508105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:31.609 [2024-11-29 03:16:47.508273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:29:31.609 [2024-11-29 03:16:47.508302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.180 [2024-11-29 03:16:48.147993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.180 [2024-11-29 03:16:48.148325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:32.180 [2024-11-29 03:16:48.148354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 639.036 ms 00:29:32.180 [2024-11-29 03:16:48.148365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.180 [2024-11-29 03:16:48.150372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.180 [2024-11-29 03:16:48.150424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:32.180 [2024-11-29 03:16:48.150436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.442 ms 00:29:32.180 [2024-11-29 03:16:48.150444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.180 [2024-11-29 03:16:48.150935] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:32.180 [2024-11-29 03:16:48.150968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.180 [2024-11-29 03:16:48.150977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:32.180 [2024-11-29 03:16:48.150988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:29:32.181 [2024-11-29 03:16:48.150998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.151036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.151047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:32.181 [2024-11-29 03:16:48.151056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.181 [2024-11-29 03:16:48.151065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.151106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 643.367 ms, result 0 00:29:32.181 [2024-11-29 03:16:48.151153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:32.181 [2024-11-29 03:16:48.151167] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:32.181 [2024-11-29 03:16:48.151177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.151186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:32.181 [2024-11-29 03:16:48.151195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1310.448 ms 00:29:32.181 [2024-11-29 03:16:48.151208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.151239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.151250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:32.181 [2024-11-29 03:16:48.151259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:32.181 [2024-11-29 03:16:48.151267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.161066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:32.181 [2024-11-29 03:16:48.161325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.161367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:32.181 [2024-11-29 03:16:48.161453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.041 ms 00:29:32.181 [2024-11-29 03:16:48.161479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.162262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.162419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:32.181 [2024-11-29 03:16:48.162494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:29:32.181 [2024-11-29 03:16:48.162521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.164783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.164942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:32.181 [2024-11-29 03:16:48.165025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.220 ms 00:29:32.181 [2024-11-29 03:16:48.165048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.165144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.165172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:32.181 [2024-11-29 03:16:48.165193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:32.181 [2024-11-29 03:16:48.165212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.165336] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.165438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:32.181 [2024-11-29 03:16:48.165471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:32.181 [2024-11-29 03:16:48.165490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.165530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.165551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:32.181 [2024-11-29 03:16:48.165572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:32.181 [2024-11-29 03:16:48.165596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.165695] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:32.181 [2024-11-29 03:16:48.165727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.165748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:32.181 [2024-11-29 03:16:48.165822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:32.181 [2024-11-29 03:16:48.165880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.166007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.181 [2024-11-29 03:16:48.166035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:32.181 [2024-11-29 03:16:48.166174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:32.181 [2024-11-29 03:16:48.166204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.181 [2024-11-29 03:16:48.167364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1396.467 ms, result 0 00:29:32.442 [2024-11-29 03:16:48.182008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.442 [2024-11-29 03:16:48.198025] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:32.442 [2024-11-29 03:16:48.206163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:33.013 Validate MD5 checksum, iteration 1 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:33.013 03:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:33.013 [2024-11-29 03:16:48.815698] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:33.013 [2024-11-29 03:16:48.817957] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94535 ] 00:29:33.013 [2024-11-29 03:16:48.972019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.013 [2024-11-29 03:16:49.000575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.397  [2024-11-29T03:16:51.328Z] Copying: 548/1024 [MB] (548 MBps) [2024-11-29T03:16:51.900Z] Copying: 1024/1024 [MB] (average 581 MBps) 00:29:35.908 00:29:35.908 03:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:35.908 03:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:37.822 Validate MD5 checksum, iteration 2 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=75c1f5c7a17aca89aa0e14936e4b6744 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 75c1f5c7a17aca89aa0e14936e4b6744 != \7\5\c\1\f\5\c\7\a\1\7\a\c\a\8\9\a\a\0\e\1\4\9\3\6\e\4\b\6\7\4\4 ]] 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:37.822 03:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:38.080 [2024-11-29 03:16:53.836225] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:38.080 [2024-11-29 03:16:53.836335] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94591 ] 00:29:38.080 [2024-11-29 03:16:53.983344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.080 [2024-11-29 03:16:54.001265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.464  [2024-11-29T03:16:56.398Z] Copying: 648/1024 [MB] (648 MBps) [2024-11-29T03:16:56.659Z] Copying: 1024/1024 [MB] (average 597 MBps) 00:29:40.667 00:29:40.667 03:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:40.667 03:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ffaa7638e45eef5ba8fcb7675040ce0 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ffaa7638e45eef5ba8fcb7675040ce0 != \4\f\f\a\a\7\6\3\8\e\4\5\e\e\f\5\b\a\8\f\c\b\7\6\7\5\0\4\0\c\e\0 ]] 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94500 ]] 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94500 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94500 ']' 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94500 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94500 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:29:43.214 killing process with pid 94500 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94500' 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94500 00:29:43.214 03:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94500 00:29:43.214 [2024-11-29 03:16:58.917376] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:43.214 [2024-11-29 03:16:58.920103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.214 [2024-11-29 03:16:58.920263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:43.214 [2024-11-29 03:16:58.920280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:43.214 [2024-11-29 03:16:58.920287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.920310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:43.215 [2024-11-29 03:16:58.920682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.920696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:43.215 [2024-11-29 03:16:58.920707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:29:43.215 [2024-11-29 03:16:58.920713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.921021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.921048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:43.215 [2024-11-29 03:16:58.921065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:29:43.215 [2024-11-29 03:16:58.921080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.922155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.922249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:43.215 [2024-11-29 03:16:58.922293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.054 ms 00:29:43.215 [2024-11-29 03:16:58.922314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.923253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.923320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:43.215 [2024-11-29 03:16:58.923359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.905 ms 00:29:43.215 [2024-11-29 03:16:58.923377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.925202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.925286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:43.215 [2024-11-29 03:16:58.925330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.786 ms 00:29:43.215 [2024-11-29 03:16:58.925347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.926912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.926996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Persist valid map metadata 00:29:43.215 [2024-11-29 03:16:58.927008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:29:43.215 [2024-11-29 03:16:58.927015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.927073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.927081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:43.215 [2024-11-29 03:16:58.927087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:43.215 [2024-11-29 03:16:58.927097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.928528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.928617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:43.215 [2024-11-29 03:16:58.928656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.418 ms 00:29:43.215 [2024-11-29 03:16:58.928673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.929821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.929953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:43.215 [2024-11-29 03:16:58.930033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.116 ms 00:29:43.215 [2024-11-29 03:16:58.930051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.931050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.931134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:43.215 [2024-11-29 03:16:58.931174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.966 ms 00:29:43.215 [2024-11-29 03:16:58.931190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.932338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.932417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:43.215 [2024-11-29 03:16:58.932485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:29:43.215 [2024-11-29 03:16:58.932502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.932570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:43.215 [2024-11-29 03:16:58.932598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:43.215 [2024-11-29 03:16:58.932623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:43.215 [2024-11-29 03:16:58.932646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:43.215 [2024-11-29 03:16:58.932669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932819] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.932994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:43.215 [2024-11-29 03:16:58.933032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:43.215 [2024-11-29 03:16:58.933038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ad7a6e26-5f99-4564-b944-47558ba162fc 00:29:43.215 [2024-11-29 03:16:58.933045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:43.215 [2024-11-29 03:16:58.933050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:43.215 [2024-11-29 03:16:58.933056] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:43.215 [2024-11-29 03:16:58.933062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:43.215 [2024-11-29 03:16:58.933067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:43.215 [2024-11-29 03:16:58.933072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:43.215 [2024-11-29 03:16:58.933082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:43.215 [2024-11-29 03:16:58.933087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:43.215 [2024-11-29 03:16:58.933092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:43.215 [2024-11-29 03:16:58.933098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.933104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:43.215 [2024-11-29 03:16:58.933111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:29:43.215 [2024-11-29 03:16:58.933116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.934353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.934371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 
00:29:43.215 [2024-11-29 03:16:58.934379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.221 ms 00:29:43.215 [2024-11-29 03:16:58.934390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.934461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.215 [2024-11-29 03:16:58.934468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:43.215 [2024-11-29 03:16:58.934475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:43.215 [2024-11-29 03:16:58.934481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.939153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.215 [2024-11-29 03:16:58.939238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:43.215 [2024-11-29 03:16:58.939284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.215 [2024-11-29 03:16:58.939304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.939336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.215 [2024-11-29 03:16:58.939352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:43.215 [2024-11-29 03:16:58.939366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.215 [2024-11-29 03:16:58.939381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.939445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.215 [2024-11-29 03:16:58.939550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:43.215 [2024-11-29 03:16:58.939568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.215 [2024-11-29 03:16:58.939582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.939610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.215 [2024-11-29 03:16:58.939626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:43.215 [2024-11-29 03:16:58.939641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.215 [2024-11-29 03:16:58.939656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.947639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.215 [2024-11-29 03:16:58.947750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:43.215 [2024-11-29 03:16:58.947790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.215 [2024-11-29 03:16:58.947808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.215 [2024-11-29 03:16:58.954050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:43.216 [2024-11-29 03:16:58.954197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954277] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:43.216 [2024-11-29 03:16:58.954292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:43.216 [2024-11-29 03:16:58.954397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:43.216 [2024-11-29 03:16:58.954544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:43.216 [2024-11-29 03:16:58.954668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:43.216 [2024-11-29 03:16:58.954757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.954812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.216 [2024-11-29 03:16:58.954851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:43.216 [2024-11-29 03:16:58.954957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.216 [2024-11-29 03:16:58.954974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.216 [2024-11-29 03:16:58.955087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 34.955 ms, result 0 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:43.216 Remove shared memory 
files 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94301 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:43.216 ************************************ 00:29:43.216 END TEST ftl_upgrade_shutdown 00:29:43.216 ************************************ 00:29:43.216 00:29:43.216 real 1m15.559s 00:29:43.216 user 1m38.967s 00:29:43.216 sys 0m19.979s 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.216 03:16:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:43.216 03:16:59 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:43.216 03:16:59 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:43.216 03:16:59 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:43.216 03:16:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.216 03:16:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:43.216 ************************************ 00:29:43.216 START TEST ftl_restore_fast 00:29:43.216 ************************************ 00:29:43.216 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:43.478 * Looking for test storage... 00:29:43.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.478 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.479 --rc genhtml_branch_coverage=1 00:29:43.479 --rc genhtml_function_coverage=1 00:29:43.479 --rc genhtml_legend=1 00:29:43.479 --rc geninfo_all_blocks=1 00:29:43.479 --rc geninfo_unexecuted_blocks=1 00:29:43.479 00:29:43.479 ' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.479 --rc genhtml_branch_coverage=1 00:29:43.479 --rc genhtml_function_coverage=1 00:29:43.479 --rc genhtml_legend=1 00:29:43.479 --rc geninfo_all_blocks=1 00:29:43.479 --rc geninfo_unexecuted_blocks=1 00:29:43.479 00:29:43.479 ' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.479 --rc genhtml_branch_coverage=1 00:29:43.479 --rc genhtml_function_coverage=1 00:29:43.479 --rc genhtml_legend=1 00:29:43.479 --rc geninfo_all_blocks=1 00:29:43.479 --rc geninfo_unexecuted_blocks=1 00:29:43.479 00:29:43.479 ' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.479 --rc genhtml_branch_coverage=1 00:29:43.479 --rc genhtml_function_coverage=1 00:29:43.479 --rc genhtml_legend=1 00:29:43.479 --rc geninfo_all_blocks=1 00:29:43.479 --rc geninfo_unexecuted_blocks=1 00:29:43.479 00:29:43.479 ' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
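
The xtrace above steps through the component-wise version check from scripts/common.sh (triggered by "lt 1.15 2"): each version string is split on '.', '-' and ':' into an array, and components are compared numerically left to right until one differs. A minimal re-implementation of that pattern, with illustrative names rather than the exact helpers from scripts/common.sh:

  # Sketch of the comparison traced above; returns 0 (true) iff $1 < $2.
  version_lt() {
      local -a v1 v2
      local i n
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          # Missing or non-numeric components compare as 0.
          local a=${v1[i]:-0} b=${v2[i]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal is not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check in the trace
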
00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.20l3u7wMra 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:43.479 03:16:59 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94726 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94726 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94726 ']' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.479 03:16:59 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:43.479 [2024-11-29 03:16:59.432768] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:43.479 [2024-11-29 03:16:59.433181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94726 ] 00:29:43.739 [2024-11-29 03:16:59.578537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.739 [2024-11-29 03:16:59.601727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:44.306 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:44.566 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:44.843 { 00:29:44.843 "name": "nvme0n1", 00:29:44.843 "aliases": [ 00:29:44.843 "dd319e32-46e5-4bc9-a3e6-68260208cd68" 00:29:44.843 ], 00:29:44.843 "product_name": "NVMe disk", 00:29:44.843 "block_size": 4096, 00:29:44.843 "num_blocks": 1310720, 00:29:44.843 "uuid": "dd319e32-46e5-4bc9-a3e6-68260208cd68", 00:29:44.843 "numa_id": -1, 00:29:44.843 "assigned_rate_limits": { 00:29:44.843 "rw_ios_per_sec": 0, 00:29:44.843 "rw_mbytes_per_sec": 0, 00:29:44.843 "r_mbytes_per_sec": 0, 00:29:44.843 "w_mbytes_per_sec": 0 00:29:44.843 }, 00:29:44.843 "claimed": true, 00:29:44.843 "claim_type": "read_many_write_one", 00:29:44.843 "zoned": false, 00:29:44.843 "supported_io_types": { 00:29:44.843 "read": true, 00:29:44.843 "write": true, 00:29:44.843 "unmap": true, 00:29:44.843 "flush": true, 00:29:44.843 "reset": true, 00:29:44.843 "nvme_admin": true, 00:29:44.843 "nvme_io": true, 00:29:44.843 "nvme_io_md": false, 00:29:44.843 "write_zeroes": true, 00:29:44.843 "zcopy": false, 00:29:44.843 "get_zone_info": false, 00:29:44.843 "zone_management": false, 00:29:44.843 "zone_append": false, 00:29:44.843 "compare": true, 00:29:44.843 "compare_and_write": false, 00:29:44.843 "abort": true, 00:29:44.843 "seek_hole": false, 00:29:44.843 "seek_data": false, 00:29:44.843 "copy": true, 00:29:44.843 "nvme_iov_md": false 00:29:44.843 }, 00:29:44.843 "driver_specific": { 00:29:44.843 "nvme": [ 00:29:44.843 { 00:29:44.843 "pci_address": "0000:00:11.0", 00:29:44.843 "trid": { 00:29:44.843 "trtype": "PCIe", 00:29:44.843 "traddr": "0000:00:11.0" 00:29:44.843 }, 00:29:44.843 "ctrlr_data": { 00:29:44.843 "cntlid": 0, 00:29:44.843 "vendor_id": "0x1b36", 00:29:44.843 "model_number": "QEMU NVMe Ctrl", 00:29:44.843 "serial_number": "12341", 00:29:44.843 "firmware_revision": "8.0.0", 00:29:44.843 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:44.843 "oacs": { 00:29:44.843 "security": 0, 00:29:44.843 "format": 1, 00:29:44.843 "firmware": 0, 00:29:44.843 "ns_manage": 1 00:29:44.843 }, 00:29:44.843 "multi_ctrlr": false, 00:29:44.843 "ana_reporting": false 00:29:44.843 }, 00:29:44.843 "vs": { 00:29:44.843 "nvme_version": "1.4" 00:29:44.843 }, 00:29:44.843 "ns_data": { 00:29:44.843 "id": 1, 00:29:44.843 "can_share": false 00:29:44.843 } 00:29:44.843 } 00:29:44.843 ], 00:29:44.843 "mp_policy": "active_passive" 00:29:44.843 } 00:29:44.843 } 00:29:44.843 ]' 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:44.843 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:44.844 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:44.844 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:29:44.844 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.113 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=43b0d910-b2b1-4230-9902-a0d6d7b3de83 00:29:45.113 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:45.114 03:17:00 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43b0d910-b2b1-4230-9902-a0d6d7b3de83 00:29:45.372 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=5f4136df-cb44-494d-9a90-f61ba81d6da6 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5f4136df-cb44-494d-9a90-f61ba81d6da6 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:45.631 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.891 { 00:29:45.891 "name": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:45.891 "aliases": [ 00:29:45.891 "lvs/nvme0n1p0" 00:29:45.891 ], 00:29:45.891 "product_name": "Logical Volume", 00:29:45.891 "block_size": 4096, 00:29:45.891 "num_blocks": 26476544, 00:29:45.891 "uuid": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:45.891 "assigned_rate_limits": { 00:29:45.891 "rw_ios_per_sec": 0, 00:29:45.891 "rw_mbytes_per_sec": 0, 00:29:45.891 "r_mbytes_per_sec": 0, 00:29:45.891 "w_mbytes_per_sec": 0 00:29:45.891 }, 00:29:45.891 "claimed": false, 00:29:45.891 "zoned": false, 00:29:45.891 "supported_io_types": { 00:29:45.891 "read": true, 00:29:45.891 "write": true, 00:29:45.891 "unmap": true, 00:29:45.891 "flush": false, 00:29:45.891 "reset": true, 00:29:45.891 "nvme_admin": false, 00:29:45.891 "nvme_io": false, 00:29:45.891 "nvme_io_md": false, 00:29:45.891 "write_zeroes": true, 00:29:45.891 "zcopy": false, 00:29:45.891 "get_zone_info": false, 00:29:45.891 "zone_management": false, 00:29:45.891 "zone_append": 
false, 00:29:45.891 "compare": false, 00:29:45.891 "compare_and_write": false, 00:29:45.891 "abort": false, 00:29:45.891 "seek_hole": true, 00:29:45.891 "seek_data": true, 00:29:45.891 "copy": false, 00:29:45.891 "nvme_iov_md": false 00:29:45.891 }, 00:29:45.891 "driver_specific": { 00:29:45.891 "lvol": { 00:29:45.891 "lvol_store_uuid": "5f4136df-cb44-494d-9a90-f61ba81d6da6", 00:29:45.891 "base_bdev": "nvme0n1", 00:29:45.891 "thin_provision": true, 00:29:45.891 "num_allocated_clusters": 0, 00:29:45.891 "snapshot": false, 00:29:45.891 "clone": false, 00:29:45.891 "esnap_clone": false 00:29:45.891 } 00:29:45.891 } 00:29:45.891 } 00:29:45.891 ]' 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:45.891 03:17:01 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:46.150 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:46.409 { 00:29:46.409 "name": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:46.409 "aliases": [ 00:29:46.409 "lvs/nvme0n1p0" 00:29:46.409 ], 00:29:46.409 "product_name": "Logical Volume", 00:29:46.409 "block_size": 4096, 00:29:46.409 "num_blocks": 26476544, 00:29:46.409 "uuid": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:46.409 "assigned_rate_limits": { 00:29:46.409 "rw_ios_per_sec": 0, 00:29:46.409 "rw_mbytes_per_sec": 0, 00:29:46.409 "r_mbytes_per_sec": 0, 00:29:46.409 "w_mbytes_per_sec": 0 00:29:46.409 }, 00:29:46.409 "claimed": false, 00:29:46.409 "zoned": false, 00:29:46.409 "supported_io_types": { 00:29:46.409 "read": true, 00:29:46.409 "write": true, 00:29:46.409 "unmap": true, 00:29:46.409 "flush": false, 00:29:46.409 "reset": true, 00:29:46.409 "nvme_admin": false, 00:29:46.409 "nvme_io": false, 00:29:46.409 "nvme_io_md": false, 00:29:46.409 "write_zeroes": true, 00:29:46.409 "zcopy": false, 00:29:46.409 "get_zone_info": false, 00:29:46.409 "zone_management": false, 
00:29:46.409 "zone_append": false, 00:29:46.409 "compare": false, 00:29:46.409 "compare_and_write": false, 00:29:46.409 "abort": false, 00:29:46.409 "seek_hole": true, 00:29:46.409 "seek_data": true, 00:29:46.409 "copy": false, 00:29:46.409 "nvme_iov_md": false 00:29:46.409 }, 00:29:46.409 "driver_specific": { 00:29:46.409 "lvol": { 00:29:46.409 "lvol_store_uuid": "5f4136df-cb44-494d-9a90-f61ba81d6da6", 00:29:46.409 "base_bdev": "nvme0n1", 00:29:46.409 "thin_provision": true, 00:29:46.409 "num_allocated_clusters": 0, 00:29:46.409 "snapshot": false, 00:29:46.409 "clone": false, 00:29:46.409 "esnap_clone": false 00:29:46.409 } 00:29:46.409 } 00:29:46.409 } 00:29:46.409 ]' 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:46.409 03:17:02 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:46.668 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e54c58c4-7637-4d45-ba4b-4ba74a64c34b 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:46.927 { 00:29:46.927 "name": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:46.927 "aliases": [ 00:29:46.927 "lvs/nvme0n1p0" 00:29:46.927 ], 00:29:46.927 "product_name": "Logical Volume", 00:29:46.927 "block_size": 4096, 00:29:46.927 "num_blocks": 26476544, 00:29:46.927 "uuid": "e54c58c4-7637-4d45-ba4b-4ba74a64c34b", 00:29:46.927 "assigned_rate_limits": { 00:29:46.927 "rw_ios_per_sec": 0, 00:29:46.927 "rw_mbytes_per_sec": 0, 00:29:46.927 "r_mbytes_per_sec": 0, 00:29:46.927 "w_mbytes_per_sec": 0 00:29:46.927 }, 00:29:46.927 "claimed": false, 00:29:46.927 "zoned": false, 00:29:46.927 "supported_io_types": { 00:29:46.927 "read": true, 00:29:46.927 "write": true, 00:29:46.927 "unmap": true, 00:29:46.927 "flush": false, 00:29:46.927 "reset": true, 00:29:46.927 "nvme_admin": false, 00:29:46.927 "nvme_io": false, 00:29:46.927 "nvme_io_md": false, 00:29:46.927 "write_zeroes": true, 00:29:46.927 "zcopy": false, 00:29:46.927 "get_zone_info": false, 00:29:46.927 "zone_management": false, 00:29:46.927 "zone_append": false, 00:29:46.927 "compare": false, 00:29:46.927 "compare_and_write": false, 00:29:46.927 "abort": false, 00:29:46.927 "seek_hole": 
true, 00:29:46.927 "seek_data": true, 00:29:46.927 "copy": false, 00:29:46.927 "nvme_iov_md": false 00:29:46.927 }, 00:29:46.927 "driver_specific": { 00:29:46.927 "lvol": { 00:29:46.927 "lvol_store_uuid": "5f4136df-cb44-494d-9a90-f61ba81d6da6", 00:29:46.927 "base_bdev": "nvme0n1", 00:29:46.927 "thin_provision": true, 00:29:46.927 "num_allocated_clusters": 0, 00:29:46.927 "snapshot": false, 00:29:46.927 "clone": false, 00:29:46.927 "esnap_clone": false 00:29:46.927 } 00:29:46.927 } 00:29:46.927 } 00:29:46.927 ]' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e54c58c4-7637-4d45-ba4b-4ba74a64c34b --l2p_dram_limit 10' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:46.927 03:17:02 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e54c58c4-7637-4d45-ba4b-4ba74a64c34b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:47.280 [2024-11-29 03:17:03.044121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.044167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:47.280 [2024-11-29 03:17:03.044178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.280 [2024-11-29 03:17:03.044186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.044230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.044241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.280 [2024-11-29 03:17:03.044247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:47.280 [2024-11-29 03:17:03.044256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.044271] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:47.280 [2024-11-29 03:17:03.044499] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:47.280 [2024-11-29 03:17:03.044514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.044523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.280 [2024-11-29 03:17:03.044535] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:29:47.280 [2024-11-29 03:17:03.044543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.044569] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 93a08837-8e91-4216-aa7c-0f9d19487196 00:29:47.280 [2024-11-29 03:17:03.045564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.045589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:47.280 [2024-11-29 03:17:03.045600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:47.280 [2024-11-29 03:17:03.045609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.050312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.050339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.280 [2024-11-29 03:17:03.050348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:29:47.280 [2024-11-29 03:17:03.050354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.050418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.050425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.280 [2024-11-29 03:17:03.050432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:47.280 [2024-11-29 03:17:03.050437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.050481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.050489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:47.280 [2024-11-29 03:17:03.050496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:47.280 [2024-11-29 03:17:03.050502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.050521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.280 [2024-11-29 03:17:03.051800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.051836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.280 [2024-11-29 03:17:03.051847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:29:47.280 [2024-11-29 03:17:03.051854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.051879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.280 [2024-11-29 03:17:03.051887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:47.280 [2024-11-29 03:17:03.051894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:47.280 [2024-11-29 03:17:03.051904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.280 [2024-11-29 03:17:03.051917] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:47.280 [2024-11-29 03:17:03.052027] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:47.280 [2024-11-29 03:17:03.052038] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:47.280 [2024-11-29 03:17:03.052048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:47.280 [2024-11-29 03:17:03.052057] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:47.280 [2024-11-29 03:17:03.052069] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:47.281 [2024-11-29 03:17:03.052084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:47.281 [2024-11-29 03:17:03.052090] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:47.281 [2024-11-29 03:17:03.052097] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:47.281 [2024-11-29 03:17:03.052102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.281 [2024-11-29 03:17:03.052110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:47.281 [2024-11-29 03:17:03.052119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:29:47.281 [2024-11-29 03:17:03.052129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.281 [2024-11-29 03:17:03.052192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.281 [2024-11-29 03:17:03.052202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:47.281 [2024-11-29 03:17:03.052208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:47.281 [2024-11-29 03:17:03.052216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.281 [2024-11-29 03:17:03.052287] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:47.281 [2024-11-29 03:17:03.052296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:47.281 [2024-11-29 03:17:03.052303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:47.281 [2024-11-29 03:17:03.052322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:47.281 [2024-11-29 03:17:03.052341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.281 [2024-11-29 03:17:03.052353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:47.281 [2024-11-29 03:17:03.052360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:47.281 [2024-11-29 03:17:03.052365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.281 [2024-11-29 03:17:03.052374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:47.281 [2024-11-29 03:17:03.052380] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:29:47.281 [2024-11-29 03:17:03.052386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:47.281 [2024-11-29 03:17:03.052398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:47.281 [2024-11-29 03:17:03.052415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:47.281 [2024-11-29 03:17:03.052433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:47.281 [2024-11-29 03:17:03.052450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:47.281 [2024-11-29 03:17:03.052472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:47.281 [2024-11-29 03:17:03.052491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.281 [2024-11-29 03:17:03.052506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:47.281 [2024-11-29 03:17:03.052513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:47.281 [2024-11-29 03:17:03.052518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.281 [2024-11-29 03:17:03.052526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:47.281 [2024-11-29 03:17:03.052533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:47.281 [2024-11-29 03:17:03.052540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:47.281 [2024-11-29 03:17:03.052553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:47.281 [2024-11-29 03:17:03.052558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:47.281 [2024-11-29 03:17:03.052578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:47.281 [2024-11-29 03:17:03.052589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.281 [2024-11-29 
03:17:03.052595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.281 [2024-11-29 03:17:03.052606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:47.281 [2024-11-29 03:17:03.052613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:47.281 [2024-11-29 03:17:03.052621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:47.281 [2024-11-29 03:17:03.052627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:47.281 [2024-11-29 03:17:03.052634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:47.281 [2024-11-29 03:17:03.052640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:47.281 [2024-11-29 03:17:03.052651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:47.281 [2024-11-29 03:17:03.052659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:47.281 [2024-11-29 03:17:03.052675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:47.281 [2024-11-29 03:17:03.052682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:47.281 [2024-11-29 03:17:03.052689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:47.281 [2024-11-29 03:17:03.052696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:47.281 [2024-11-29 03:17:03.052702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:47.281 [2024-11-29 03:17:03.052711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:47.281 [2024-11-29 03:17:03.052717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:47.281 [2024-11-29 03:17:03.052725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:47.281 [2024-11-29 03:17:03.052733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:47.281 [2024-11-29 
03:17:03.052769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:47.281 [2024-11-29 03:17:03.052775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.281 [2024-11-29 03:17:03.052790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:47.281 [2024-11-29 03:17:03.052798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:47.281 [2024-11-29 03:17:03.052805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:47.281 [2024-11-29 03:17:03.052812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.281 [2024-11-29 03:17:03.052819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:47.281 [2024-11-29 03:17:03.052839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:29:47.281 [2024-11-29 03:17:03.052846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.281 [2024-11-29 03:17:03.052884] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:47.281 [2024-11-29 03:17:03.052892] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:51.492 [2024-11-29 03:17:06.633758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.634070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:51.492 [2024-11-29 03:17:06.634156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3580.852 ms 00:29:51.492 [2024-11-29 03:17:06.634184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.647825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.648040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.492 [2024-11-29 03:17:06.648175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.509 ms 00:29:51.492 [2024-11-29 03:17:06.648206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.648355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.648472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:51.492 [2024-11-29 03:17:06.648502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:51.492 [2024-11-29 03:17:06.648524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.661208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.661389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.492 [2024-11-29 03:17:06.661466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.612 ms 00:29:51.492 [2024-11-29 03:17:06.661494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.661543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.661565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.492 [2024-11-29 03:17:06.661589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.492 [2024-11-29 03:17:06.661610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.662241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.662393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.492 [2024-11-29 03:17:06.662486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:29:51.492 [2024-11-29 03:17:06.662512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.662659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.662928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.492 [2024-11-29 03:17:06.662960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:51.492 [2024-11-29 03:17:06.662982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.671624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.671775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.492 [2024-11-29 03:17:06.671851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.594 ms 00:29:51.492 [2024-11-29 03:17:06.671876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.692079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:51.492 [2024-11-29 03:17:06.695813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.695993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:51.492 [2024-11-29 03:17:06.696097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.805 ms 00:29:51.492 [2024-11-29 03:17:06.696140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.760283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.760439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:51.492 [2024-11-29 03:17:06.760498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.075 ms 00:29:51.492 [2024-11-29 03:17:06.760587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.760781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.760861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:51.492 [2024-11-29 03:17:06.760920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:29:51.492 [2024-11-29 03:17:06.760948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.764781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.764906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
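
Condensed from the xtrace above, the device stack this test brings up is built with the following RPC sequence (the UUIDs are the ones allocated in this run and will differ elsewhere):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base NVMe namespace (QEMU NVMe ctrl at 0000:00:11.0) -> nvme0n1
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

  # Lvstore plus a thin-provisioned 103424 MiB lvol on top of the base device
  # (matches "Base device capacity: 103424.00 MiB" in the layout dump)
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 5f4136df-cb44-494d-9a90-f61ba81d6da6

  # Second NVMe device (0000:00:10.0), split to provide the 5171 MiB NV cache
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

  # FTL device over the lvol (base) + nvc0n1p0 (NV cache), 10 MiB L2P DRAM limit,
  # with --fast-shutdown, which is the behavior this restore test exercises
  $RPC -t 240 bdev_ftl_create -b ftl0 -d e54c58c4-7637-4d45-ba4b-4ba74a64c34b \
      --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown
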
00:29:51.492 [2024-11-29 03:17:06.764959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:29:51.492 [2024-11-29 03:17:06.764984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.767802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.767917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:51.492 [2024-11-29 03:17:06.767976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:29:51.492 [2024-11-29 03:17:06.767998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.768307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.768339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:51.492 [2024-11-29 03:17:06.768359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:29:51.492 [2024-11-29 03:17:06.768753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.801242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.801300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:51.492 [2024-11-29 03:17:06.801316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.170 ms 00:29:51.492 [2024-11-29 03:17:06.801327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.806499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.806544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:51.492 [2024-11-29 03:17:06.806555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.122 ms 00:29:51.492 [2024-11-29 03:17:06.806565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.810497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.810538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:51.492 [2024-11-29 03:17:06.810547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.895 ms 00:29:51.492 [2024-11-29 03:17:06.810557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.815543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.815593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:51.492 [2024-11-29 03:17:06.815606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:29:51.492 [2024-11-29 03:17:06.815618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.815659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.815676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:51.492 [2024-11-29 03:17:06.815685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:51.492 [2024-11-29 03:17:06.815694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.815760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:06.815772] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:51.492 [2024-11-29 03:17:06.815780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:51.492 [2024-11-29 03:17:06.815792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:06.816802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3772.260 ms, result 0 00:29:51.492 { 00:29:51.492 "name": "ftl0", 00:29:51.492 "uuid": "93a08837-8e91-4216-aa7c-0f9d19487196" 00:29:51.492 } 00:29:51.492 03:17:06 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:51.492 03:17:06 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:51.492 03:17:07 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:51.492 03:17:07 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:51.492 [2024-11-29 03:17:07.260999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.492 [2024-11-29 03:17:07.261049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.492 [2024-11-29 03:17:07.261067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:51.492 [2024-11-29 03:17:07.261076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.492 [2024-11-29 03:17:07.261103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.493 [2024-11-29 03:17:07.261803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.261870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.493 [2024-11-29 03:17:07.261882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:29:51.493 [2024-11-29 03:17:07.261921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.262189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.262210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.493 [2024-11-29 03:17:07.262227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:29:51.493 [2024-11-29 03:17:07.262239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.265484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.265510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:51.493 [2024-11-29 03:17:07.265520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:29:51.493 [2024-11-29 03:17:07.265531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.271789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.272005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:51.493 [2024-11-29 03:17:07.272026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:29:51.493 [2024-11-29 03:17:07.272040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.275043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 
[2024-11-29 03:17:07.275234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:51.493 [2024-11-29 03:17:07.275252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:29:51.493 [2024-11-29 03:17:07.275265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.280808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.280890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:51.493 [2024-11-29 03:17:07.280903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:29:51.493 [2024-11-29 03:17:07.280915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.281052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.281069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:51.493 [2024-11-29 03:17:07.281079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:51.493 [2024-11-29 03:17:07.281089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.284078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.284135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:51.493 [2024-11-29 03:17:07.284145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:29:51.493 [2024-11-29 03:17:07.284154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.286893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.286939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:51.493 [2024-11-29 03:17:07.286949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:29:51.493 [2024-11-29 03:17:07.286959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.289137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.289192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:51.493 [2024-11-29 03:17:07.289202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:29:51.493 [2024-11-29 03:17:07.289211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.291512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.493 [2024-11-29 03:17:07.291567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:51.493 [2024-11-29 03:17:07.291578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:29:51.493 [2024-11-29 03:17:07.291591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.493 [2024-11-29 03:17:07.291634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.493 [2024-11-29 03:17:07.291651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291939] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.291997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 03:17:07.292154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.493 [2024-11-29 
03:17:07.292167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:51.494 [2024-11-29 03:17:07.292382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.494 [2024-11-29 03:17:07.292601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.494 [2024-11-29 03:17:07.292617] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93a08837-8e91-4216-aa7c-0f9d19487196 00:29:51.494 
[2024-11-29 03:17:07.292628] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.494 [2024-11-29 03:17:07.292635] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:51.494 [2024-11-29 03:17:07.292645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.494 [2024-11-29 03:17:07.292654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.494 [2024-11-29 03:17:07.292667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.494 [2024-11-29 03:17:07.292675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.494 [2024-11-29 03:17:07.292685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.494 [2024-11-29 03:17:07.292692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.494 [2024-11-29 03:17:07.292701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.494 [2024-11-29 03:17:07.292708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.494 [2024-11-29 03:17:07.292718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.494 [2024-11-29 03:17:07.292726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:29:51.494 [2024-11-29 03:17:07.292736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.295209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.494 [2024-11-29 03:17:07.295359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.494 [2024-11-29 03:17:07.295424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:29:51.494 [2024-11-29 03:17:07.295451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.295632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.494 [2024-11-29 03:17:07.295682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.494 [2024-11-29 03:17:07.295761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:51.494 [2024-11-29 03:17:07.295787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.303924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.304094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.494 [2024-11-29 03:17:07.304155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.304187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.304408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.304461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.494 [2024-11-29 03:17:07.304482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.304504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.304607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.304818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.494 [2024-11-29 03:17:07.304861] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.304887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.304920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.304949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.494 [2024-11-29 03:17:07.304975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.305057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.318980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.319171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.494 [2024-11-29 03:17:07.319228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.319253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.329637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.329812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.494 [2024-11-29 03:17:07.329938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.329968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.330059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.330090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.494 [2024-11-29 03:17:07.330116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.494 [2024-11-29 03:17:07.330137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.494 [2024-11-29 03:17:07.330201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.494 [2024-11-29 03:17:07.330228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.495 [2024-11-29 03:17:07.330249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.495 [2024-11-29 03:17:07.330319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.495 [2024-11-29 03:17:07.330418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.495 [2024-11-29 03:17:07.330445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.495 [2024-11-29 03:17:07.330465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.495 [2024-11-29 03:17:07.330610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.495 [2024-11-29 03:17:07.330706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.495 [2024-11-29 03:17:07.330842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.495 [2024-11-29 03:17:07.330868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.495 [2024-11-29 03:17:07.330889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.495 [2024-11-29 03:17:07.330943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.495 [2024-11-29 03:17:07.330970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:51.495 [2024-11-29 03:17:07.330994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.495 [2024-11-29 03:17:07.331059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.495 [2024-11-29 03:17:07.331127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.495 [2024-11-29 03:17:07.331158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.495 [2024-11-29 03:17:07.331178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.495 [2024-11-29 03:17:07.331198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.495 [2024-11-29 03:17:07.331407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.369 ms, result 0 00:29:51.495 true 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94726 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94726 ']' 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94726 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94726 00:29:51.495 killing process with pid 94726 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94726' 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94726 00:29:51.495 03:17:07 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94726 00:29:56.768 03:17:11 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:00.970 262144+0 records in 00:30:00.970 262144+0 records out 00:30:00.970 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.16981 s, 258 MB/s 00:30:00.970 03:17:16 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:02.356 03:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:02.356 [2024-11-29 03:17:18.199859] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:30:02.356 [2024-11-29 03:17:18.199973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94935 ] 00:30:02.356 [2024-11-29 03:17:18.336158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.617 [2024-11-29 03:17:18.356300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.618 [2024-11-29 03:17:18.454764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:02.618 [2024-11-29 03:17:18.454861] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:02.913 [2024-11-29 03:17:18.615465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.913 [2024-11-29 03:17:18.615525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:02.913 [2024-11-29 03:17:18.615541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:02.913 [2024-11-29 03:17:18.615549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.913 [2024-11-29 03:17:18.615614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.615625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:02.914 [2024-11-29 03:17:18.615635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:02.914 [2024-11-29 03:17:18.615651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.615678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:02.914 [2024-11-29 03:17:18.615998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:02.914 [2024-11-29 03:17:18.616026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.616036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:02.914 [2024-11-29 03:17:18.616051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:30:02.914 [2024-11-29 03:17:18.616059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.617780] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:02.914 [2024-11-29 03:17:18.621642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.621861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:02.914 [2024-11-29 03:17:18.621883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.863 ms 00:30:02.914 [2024-11-29 03:17:18.621917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.621986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.621999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:02.914 [2024-11-29 03:17:18.622009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:02.914 [2024-11-29 03:17:18.622017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.630308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:02.914 [2024-11-29 03:17:18.630354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:02.914 [2024-11-29 03:17:18.630371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.248 ms 00:30:02.914 [2024-11-29 03:17:18.630379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.630483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.630494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:02.914 [2024-11-29 03:17:18.630507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:02.914 [2024-11-29 03:17:18.630517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.630576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.630592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:02.914 [2024-11-29 03:17:18.630602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:02.914 [2024-11-29 03:17:18.630611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.630634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:02.914 [2024-11-29 03:17:18.632773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.632816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:02.914 [2024-11-29 03:17:18.632827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:30:02.914 [2024-11-29 03:17:18.632855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.632890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.632903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:02.914 [2024-11-29 03:17:18.632912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:02.914 [2024-11-29 03:17:18.632922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.632948] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:02.914 [2024-11-29 03:17:18.632970] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:02.914 [2024-11-29 03:17:18.633011] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:02.914 [2024-11-29 03:17:18.633027] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:02.914 [2024-11-29 03:17:18.633132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:02.914 [2024-11-29 03:17:18.633143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:02.914 [2024-11-29 03:17:18.633157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:02.914 [2024-11-29 03:17:18.633168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633181] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:02.914 [2024-11-29 03:17:18.633197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:02.914 [2024-11-29 03:17:18.633206] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:02.914 [2024-11-29 03:17:18.633213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:02.914 [2024-11-29 03:17:18.633222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.633229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:02.914 [2024-11-29 03:17:18.633238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:30:02.914 [2024-11-29 03:17:18.633247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.633333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.914 [2024-11-29 03:17:18.633342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:02.914 [2024-11-29 03:17:18.633349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:02.914 [2024-11-29 03:17:18.633356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.914 [2024-11-29 03:17:18.633461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:02.914 [2024-11-29 03:17:18.633473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:02.914 [2024-11-29 03:17:18.633482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:02.914 [2024-11-29 03:17:18.633512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:02.914 [2024-11-29 03:17:18.633537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:02.914 [2024-11-29 03:17:18.633555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:02.914 [2024-11-29 03:17:18.633565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:02.914 [2024-11-29 03:17:18.633572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:02.914 [2024-11-29 03:17:18.633580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:02.914 [2024-11-29 03:17:18.633592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:02.914 [2024-11-29 03:17:18.633600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:02.914 [2024-11-29 03:17:18.633617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633624] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:02.914 [2024-11-29 03:17:18.633639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:02.914 [2024-11-29 03:17:18.633662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:02.914 [2024-11-29 03:17:18.633691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:02.914 [2024-11-29 03:17:18.633715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:02.914 [2024-11-29 03:17:18.633730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:02.914 [2024-11-29 03:17:18.633738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:02.914 [2024-11-29 03:17:18.633753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:02.914 [2024-11-29 03:17:18.633760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:02.914 [2024-11-29 03:17:18.633767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:02.914 [2024-11-29 03:17:18.633774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:02.914 [2024-11-29 03:17:18.633782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:02.914 [2024-11-29 03:17:18.633788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:02.914 [2024-11-29 03:17:18.633801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:02.914 [2024-11-29 03:17:18.633812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.914 [2024-11-29 03:17:18.633818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:02.914 [2024-11-29 03:17:18.634088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:02.915 [2024-11-29 03:17:18.634132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:02.915 [2024-11-29 03:17:18.634155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:02.915 [2024-11-29 03:17:18.634185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:02.915 [2024-11-29 03:17:18.634206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:02.915 [2024-11-29 03:17:18.634225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:02.915 
[2024-11-29 03:17:18.634244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:02.915 [2024-11-29 03:17:18.634262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:02.915 [2024-11-29 03:17:18.634282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:02.915 [2024-11-29 03:17:18.634302] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:02.915 [2024-11-29 03:17:18.634341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:02.915 [2024-11-29 03:17:18.634458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:02.915 [2024-11-29 03:17:18.634486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:02.915 [2024-11-29 03:17:18.634518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:02.915 [2024-11-29 03:17:18.634547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:02.915 [2024-11-29 03:17:18.634574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:02.915 [2024-11-29 03:17:18.634601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:02.915 [2024-11-29 03:17:18.634629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:02.915 [2024-11-29 03:17:18.634690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:02.915 [2024-11-29 03:17:18.634728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:02.915 [2024-11-29 03:17:18.634921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:02.915 [2024-11-29 03:17:18.634952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.634981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:02.915 [2024-11-29 03:17:18.635009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:02.915 [2024-11-29 03:17:18.635036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:02.915 [2024-11-29 03:17:18.635067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:02.915 [2024-11-29 03:17:18.635122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.635146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:02.915 [2024-11-29 03:17:18.635171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:30:02.915 [2024-11-29 03:17:18.635195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.649371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.649526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:02.915 [2024-11-29 03:17:18.649582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.083 ms 00:30:02.915 [2024-11-29 03:17:18.649605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.649698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.649708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:02.915 [2024-11-29 03:17:18.649717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:02.915 [2024-11-29 03:17:18.649732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.672324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.672519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:02.915 [2024-11-29 03:17:18.672590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.533 ms 00:30:02.915 [2024-11-29 03:17:18.672619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.672686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.672717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:02.915 [2024-11-29 03:17:18.672752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:02.915 [2024-11-29 03:17:18.672778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.673344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.673437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:02.915 [2024-11-29 03:17:18.673640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:30:02.915 [2024-11-29 03:17:18.673685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.673908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.674134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:02.915 [2024-11-29 03:17:18.674175] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:30:02.915 [2024-11-29 03:17:18.674195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.682209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.682352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:02.915 [2024-11-29 03:17:18.682407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.967 ms 00:30:02.915 [2024-11-29 03:17:18.682430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.686198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:02.915 [2024-11-29 03:17:18.686365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:02.915 [2024-11-29 03:17:18.686429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.686451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:02.915 [2024-11-29 03:17:18.686471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.888 ms 00:30:02.915 [2024-11-29 03:17:18.686490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.701924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.702080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:02.915 [2024-11-29 03:17:18.702136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.312 ms 00:30:02.915 [2024-11-29 03:17:18.702167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.704901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.705038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:02.915 [2024-11-29 03:17:18.705090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:30:02.915 [2024-11-29 03:17:18.705111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.707683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.707849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:02.915 [2024-11-29 03:17:18.707905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:30:02.915 [2024-11-29 03:17:18.707925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.709199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.709561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:02.915 [2024-11-29 03:17:18.709751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:30:02.915 [2024-11-29 03:17:18.710449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.738717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.738972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:02.915 [2024-11-29 03:17:18.739041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
27.975 ms 00:30:02.915 [2024-11-29 03:17:18.739066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.747376] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:02.915 [2024-11-29 03:17:18.750485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.750632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:02.915 [2024-11-29 03:17:18.750694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.364 ms 00:30:02.915 [2024-11-29 03:17:18.750706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.750790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.750802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:02.915 [2024-11-29 03:17:18.750819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:02.915 [2024-11-29 03:17:18.750853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.750929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.915 [2024-11-29 03:17:18.750941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:02.915 [2024-11-29 03:17:18.750950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:02.915 [2024-11-29 03:17:18.750962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.915 [2024-11-29 03:17:18.750987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.916 [2024-11-29 03:17:18.751000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:02.916 [2024-11-29 03:17:18.751009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:02.916 [2024-11-29 03:17:18.751020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.916 [2024-11-29 03:17:18.751058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:02.916 [2024-11-29 03:17:18.751068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.916 [2024-11-29 03:17:18.751077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:02.916 [2024-11-29 03:17:18.751086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:02.916 [2024-11-29 03:17:18.751100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.916 [2024-11-29 03:17:18.756503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.916 [2024-11-29 03:17:18.756551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:02.916 [2024-11-29 03:17:18.756572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.380 ms 00:30:02.916 [2024-11-29 03:17:18.756580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.916 [2024-11-29 03:17:18.756671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.916 [2024-11-29 03:17:18.756685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:02.916 [2024-11-29 03:17:18.756699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:02.916 [2024-11-29 03:17:18.756707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.916 
[2024-11-29 03:17:18.758025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.066 ms, result 0 00:30:03.857  [2024-11-29T03:17:20.788Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-29T03:17:22.171Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-29T03:17:23.109Z] Copying: 52/1024 [MB] (14 MBps) [2024-11-29T03:17:24.047Z] Copying: 75/1024 [MB] (22 MBps) [2024-11-29T03:17:24.984Z] Copying: 110/1024 [MB] (35 MBps) [2024-11-29T03:17:25.926Z] Copying: 134/1024 [MB] (24 MBps) [2024-11-29T03:17:26.863Z] Copying: 163/1024 [MB] (29 MBps) [2024-11-29T03:17:27.797Z] Copying: 197/1024 [MB] (34 MBps) [2024-11-29T03:17:29.181Z] Copying: 244/1024 [MB] (47 MBps) [2024-11-29T03:17:30.115Z] Copying: 263/1024 [MB] (18 MBps) [2024-11-29T03:17:31.053Z] Copying: 299/1024 [MB] (36 MBps) [2024-11-29T03:17:31.992Z] Copying: 318/1024 [MB] (19 MBps) [2024-11-29T03:17:32.936Z] Copying: 349/1024 [MB] (30 MBps) [2024-11-29T03:17:33.879Z] Copying: 383/1024 [MB] (34 MBps) [2024-11-29T03:17:34.822Z] Copying: 402/1024 [MB] (19 MBps) [2024-11-29T03:17:35.794Z] Copying: 423/1024 [MB] (20 MBps) [2024-11-29T03:17:37.183Z] Copying: 443/1024 [MB] (19 MBps) [2024-11-29T03:17:38.127Z] Copying: 460/1024 [MB] (16 MBps) [2024-11-29T03:17:39.073Z] Copying: 477/1024 [MB] (17 MBps) [2024-11-29T03:17:40.020Z] Copying: 493/1024 [MB] (15 MBps) [2024-11-29T03:17:40.963Z] Copying: 513/1024 [MB] (19 MBps) [2024-11-29T03:17:41.908Z] Copying: 527/1024 [MB] (14 MBps) [2024-11-29T03:17:42.854Z] Copying: 546/1024 [MB] (18 MBps) [2024-11-29T03:17:43.797Z] Copying: 562/1024 [MB] (15 MBps) [2024-11-29T03:17:45.192Z] Copying: 575/1024 [MB] (13 MBps) [2024-11-29T03:17:46.135Z] Copying: 588/1024 [MB] (12 MBps) [2024-11-29T03:17:47.081Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-29T03:17:48.024Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-29T03:17:48.969Z] Copying: 619/1024 [MB] (10 MBps) [2024-11-29T03:17:49.914Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-29T03:17:50.861Z] Copying: 641/1024 [MB] (11 MBps) [2024-11-29T03:17:51.807Z] Copying: 652/1024 [MB] (10 MBps) [2024-11-29T03:17:53.199Z] Copying: 664/1024 [MB] (11 MBps) [2024-11-29T03:17:53.773Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-29T03:17:55.166Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-29T03:17:56.136Z] Copying: 695/1024 [MB] (10 MBps) [2024-11-29T03:17:57.112Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-29T03:17:58.052Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-29T03:17:58.997Z] Copying: 734/1024 [MB] (17 MBps) [2024-11-29T03:17:59.938Z] Copying: 750/1024 [MB] (16 MBps) [2024-11-29T03:18:00.883Z] Copying: 773/1024 [MB] (22 MBps) [2024-11-29T03:18:01.826Z] Copying: 787/1024 [MB] (13 MBps) [2024-11-29T03:18:03.209Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-29T03:18:03.782Z] Copying: 815/1024 [MB] (17 MBps) [2024-11-29T03:18:05.177Z] Copying: 839/1024 [MB] (24 MBps) [2024-11-29T03:18:06.126Z] Copying: 855/1024 [MB] (15 MBps) [2024-11-29T03:18:07.067Z] Copying: 874/1024 [MB] (19 MBps) [2024-11-29T03:18:08.009Z] Copying: 896/1024 [MB] (21 MBps) [2024-11-29T03:18:08.967Z] Copying: 915/1024 [MB] (19 MBps) [2024-11-29T03:18:09.911Z] Copying: 937/1024 [MB] (22 MBps) [2024-11-29T03:18:10.853Z] Copying: 954/1024 [MB] (16 MBps) [2024-11-29T03:18:11.797Z] Copying: 976/1024 [MB] (22 MBps) [2024-11-29T03:18:13.186Z] Copying: 997/1024 [MB] (20 MBps) [2024-11-29T03:18:13.448Z] Copying: 1011/1024 [MB] (14 MBps) [2024-11-29T03:18:13.448Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-29 03:18:13.357132] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:57.456 [2024-11-29 03:18:13.357193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.456 [2024-11-29 03:18:13.357209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.456 [2024-11-29 03:18:13.357241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.456 [2024-11-29 03:18:13.357264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.456 [2024-11-29 03:18:13.358130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.456 [2024-11-29 03:18:13.358162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.456 [2024-11-29 03:18:13.358174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:30:57.456 [2024-11-29 03:18:13.358186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.456 [2024-11-29 03:18:13.360173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.456 [2024-11-29 03:18:13.360391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.456 [2024-11-29 03:18:13.360413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:30:57.456 [2024-11-29 03:18:13.360422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.456 [2024-11-29 03:18:13.360471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.456 [2024-11-29 03:18:13.360481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:57.456 [2024-11-29 03:18:13.360490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.456 [2024-11-29 03:18:13.360498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.456 [2024-11-29 03:18:13.360555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.456 [2024-11-29 03:18:13.360564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:57.456 [2024-11-29 03:18:13.360573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:57.456 [2024-11-29 03:18:13.360586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.456 [2024-11-29 03:18:13.360599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.456 [2024-11-29 03:18:13.360615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360878] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.360994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361072] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 
03:18:13.361271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.457 [2024-11-29 03:18:13.361326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.458 [2024-11-29 03:18:13.361417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.458 [2024-11-29 03:18:13.361425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93a08837-8e91-4216-aa7c-0f9d19487196 00:30:57.458 [2024-11-29 03:18:13.361433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.458 [2024-11-29 03:18:13.361441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:57.458 [2024-11-29 03:18:13.361448] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.458 [2024-11-29 03:18:13.361455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.458 [2024-11-29 03:18:13.361462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:57.458 [2024-11-29 03:18:13.361470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.458 [2024-11-29 03:18:13.361478] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.458 [2024-11-29 03:18:13.361485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.458 [2024-11-29 03:18:13.361492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.458 [2024-11-29 03:18:13.361500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.458 [2024-11-29 03:18:13.361507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.458 [2024-11-29 03:18:13.361519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:30:57.458 [2024-11-29 03:18:13.361527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.364312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.458 [2024-11-29 03:18:13.364363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.458 [2024-11-29 03:18:13.364386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:30:57.458 [2024-11-29 03:18:13.364414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.364555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.458 [2024-11-29 03:18:13.364581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.458 [2024-11-29 03:18:13.364944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:57.458 [2024-11-29 03:18:13.364972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.372756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.372810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.458 [2024-11-29 03:18:13.372821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.372847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.372913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.372927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.458 [2024-11-29 03:18:13.372935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.372949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.372984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.372993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.458 [2024-11-29 03:18:13.373001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.373009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.373029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.373039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.458 [2024-11-29 03:18:13.373049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.373061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.386868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:30:57.458 [2024-11-29 03:18:13.387046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.458 [2024-11-29 03:18:13.387064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.387072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.458 [2024-11-29 03:18:13.397271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.458 [2024-11-29 03:18:13.397351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.458 [2024-11-29 03:18:13.397410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.458 [2024-11-29 03:18:13.397492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.458 [2024-11-29 03:18:13.397549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.458 [2024-11-29 03:18:13.397617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.458 [2024-11-29 03:18:13.397683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.458 [2024-11-29 03:18:13.397691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.458 [2024-11-29 03:18:13.397702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.458 [2024-11-29 03:18:13.397876] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.661 ms, result 0 00:30:58.030 00:30:58.030 00:30:58.030 03:18:13 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:58.030 [2024-11-29 03:18:14.017100] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:58.030 [2024-11-29 03:18:14.017254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95491 ] 00:30:58.292 [2024-11-29 03:18:14.164793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.292 [2024-11-29 03:18:14.193262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.554 [2024-11-29 03:18:14.306312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.554 [2024-11-29 03:18:14.306385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.554 [2024-11-29 03:18:14.466671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.466732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:58.554 [2024-11-29 03:18:14.466748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:58.554 [2024-11-29 03:18:14.466756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.466818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.466853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.554 [2024-11-29 03:18:14.466863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:58.554 [2024-11-29 03:18:14.466881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.466910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:58.554 [2024-11-29 03:18:14.467203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:58.554 [2024-11-29 03:18:14.467223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.467235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.554 [2024-11-29 03:18:14.467252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:30:58.554 [2024-11-29 03:18:14.467260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.467543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:58.554 [2024-11-29 03:18:14.467578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.467587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:58.554 [2024-11-29 03:18:14.467596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:58.554 [2024-11-29 03:18:14.467608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 
[2024-11-29 03:18:14.467667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.467677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:58.554 [2024-11-29 03:18:14.467687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:58.554 [2024-11-29 03:18:14.467698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.468026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.468042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.554 [2024-11-29 03:18:14.468052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:30:58.554 [2024-11-29 03:18:14.468062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.468144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.468154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.554 [2024-11-29 03:18:14.468162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:58.554 [2024-11-29 03:18:14.468170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.468198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.468208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:58.554 [2024-11-29 03:18:14.468215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:58.554 [2024-11-29 03:18:14.468223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.468244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:58.554 [2024-11-29 03:18:14.470411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.470607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.554 [2024-11-29 03:18:14.470634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:30:58.554 [2024-11-29 03:18:14.470643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.470689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.554 [2024-11-29 03:18:14.470699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:58.554 [2024-11-29 03:18:14.470713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:58.554 [2024-11-29 03:18:14.470721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.554 [2024-11-29 03:18:14.470782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:58.554 [2024-11-29 03:18:14.470808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:58.554 [2024-11-29 03:18:14.470875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:58.554 [2024-11-29 03:18:14.470893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:58.554 [2024-11-29 03:18:14.470999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:30:58.554 [2024-11-29 03:18:14.471010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:58.554 [2024-11-29 03:18:14.471025] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:58.554 [2024-11-29 03:18:14.471035] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:58.554 [2024-11-29 03:18:14.471050] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:58.554 [2024-11-29 03:18:14.471058] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:58.554 [2024-11-29 03:18:14.471065] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:58.554 [2024-11-29 03:18:14.471073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:58.555 [2024-11-29 03:18:14.471080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:58.555 [2024-11-29 03:18:14.471093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.471100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:58.555 [2024-11-29 03:18:14.471108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:30:58.555 [2024-11-29 03:18:14.471115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.555 [2024-11-29 03:18:14.471207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.471216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:58.555 [2024-11-29 03:18:14.471226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:58.555 [2024-11-29 03:18:14.471234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.555 [2024-11-29 03:18:14.471335] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:58.555 [2024-11-29 03:18:14.471346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:58.555 [2024-11-29 03:18:14.471359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:58.555 [2024-11-29 03:18:14.471383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:58.555 [2024-11-29 03:18:14.471405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.555 [2024-11-29 03:18:14.471419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:58.555 [2024-11-29 03:18:14.471428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:58.555 [2024-11-29 03:18:14.471435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.555 [2024-11-29 03:18:14.471442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:30:58.555 [2024-11-29 03:18:14.471450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:58.555 [2024-11-29 03:18:14.471458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:58.555 [2024-11-29 03:18:14.471471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:58.555 [2024-11-29 03:18:14.471493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:58.555 [2024-11-29 03:18:14.471515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:58.555 [2024-11-29 03:18:14.471535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:58.555 [2024-11-29 03:18:14.471555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:58.555 [2024-11-29 03:18:14.471576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.555 [2024-11-29 03:18:14.471593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:58.555 [2024-11-29 03:18:14.471600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:58.555 [2024-11-29 03:18:14.471607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.555 [2024-11-29 03:18:14.471614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:58.555 [2024-11-29 03:18:14.471620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:58.555 [2024-11-29 03:18:14.471627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:58.555 [2024-11-29 03:18:14.471640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:58.555 [2024-11-29 03:18:14.471646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471656] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:58.555 [2024-11-29 03:18:14.471664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:58.555 [2024-11-29 03:18:14.471672] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.555 [2024-11-29 03:18:14.471690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:58.555 [2024-11-29 03:18:14.471697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:58.555 [2024-11-29 03:18:14.471703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:58.555 [2024-11-29 03:18:14.471713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:58.555 [2024-11-29 03:18:14.471719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:58.555 [2024-11-29 03:18:14.471726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:58.555 [2024-11-29 03:18:14.471735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:58.555 [2024-11-29 03:18:14.471744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:58.555 [2024-11-29 03:18:14.471760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:58.555 [2024-11-29 03:18:14.471767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:58.555 [2024-11-29 03:18:14.471775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:58.555 [2024-11-29 03:18:14.471783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:58.555 [2024-11-29 03:18:14.471790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:58.555 [2024-11-29 03:18:14.471797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:58.555 [2024-11-29 03:18:14.471804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:58.555 [2024-11-29 03:18:14.471811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:58.555 [2024-11-29 03:18:14.471818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:58.555 [2024-11-29 03:18:14.471879] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:58.555 [2024-11-29 03:18:14.471887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:58.555 [2024-11-29 03:18:14.471904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:58.555 [2024-11-29 03:18:14.471912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:58.555 [2024-11-29 03:18:14.471919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:58.555 [2024-11-29 03:18:14.471929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.471937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:58.555 [2024-11-29 03:18:14.471945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:30:58.555 [2024-11-29 03:18:14.471968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.555 [2024-11-29 03:18:14.482350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.482402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.555 [2024-11-29 03:18:14.482414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.337 ms 00:30:58.555 [2024-11-29 03:18:14.482422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.555 [2024-11-29 03:18:14.482510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.482518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.555 [2024-11-29 03:18:14.482526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:58.555 [2024-11-29 03:18:14.482539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.555 [2024-11-29 03:18:14.505333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.555 [2024-11-29 03:18:14.505393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.555 [2024-11-29 03:18:14.505412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.730 ms 00:30:58.555 [2024-11-29 03:18:14.505425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.505476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.505492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.556 [2024-11-29 03:18:14.505502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.556 [2024-11-29 03:18:14.505511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.505638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.505654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:30:58.556 [2024-11-29 03:18:14.505663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:58.556 [2024-11-29 03:18:14.505672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.505810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.505823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.556 [2024-11-29 03:18:14.505861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:30:58.556 [2024-11-29 03:18:14.505870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.513718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.513949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.556 [2024-11-29 03:18:14.513977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.826 ms 00:30:58.556 [2024-11-29 03:18:14.513992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.514120] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:58.556 [2024-11-29 03:18:14.514137] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:58.556 [2024-11-29 03:18:14.514149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.514163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:58.556 [2024-11-29 03:18:14.514173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:58.556 [2024-11-29 03:18:14.514184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.526999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.527053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:58.556 [2024-11-29 03:18:14.527067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.797 ms 00:30:58.556 [2024-11-29 03:18:14.527075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.527207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.527217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:58.556 [2024-11-29 03:18:14.527225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:58.556 [2024-11-29 03:18:14.527236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.527289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.527302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:58.556 [2024-11-29 03:18:14.527315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:58.556 [2024-11-29 03:18:14.527323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.527631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.527641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.556 [2024-11-29 
03:18:14.527655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:30:58.556 [2024-11-29 03:18:14.527662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.527677] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:58.556 [2024-11-29 03:18:14.527687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.527697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.556 [2024-11-29 03:18:14.527704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:58.556 [2024-11-29 03:18:14.527712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.537206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.556 [2024-11-29 03:18:14.537508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.537526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.556 [2024-11-29 03:18:14.537538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.778 ms 00:30:58.556 [2024-11-29 03:18:14.537553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.539974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.540013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.556 [2024-11-29 03:18:14.540024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:30:58.556 [2024-11-29 03:18:14.540034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.540136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.540147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.556 [2024-11-29 03:18:14.540156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:58.556 [2024-11-29 03:18:14.540167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.540192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.540201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.556 [2024-11-29 03:18:14.540209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:58.556 [2024-11-29 03:18:14.540216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.556 [2024-11-29 03:18:14.540252] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.556 [2024-11-29 03:18:14.540263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.556 [2024-11-29 03:18:14.540270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.556 [2024-11-29 03:18:14.540279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:58.556 [2024-11-29 03:18:14.540286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.818 [2024-11-29 03:18:14.547333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.818 [2024-11-29 03:18:14.547514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.818 
[2024-11-29 03:18:14.547575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.017 ms 00:30:58.818 [2024-11-29 03:18:14.547599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.818 [2024-11-29 03:18:14.547803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.818 [2024-11-29 03:18:14.548070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.818 [2024-11-29 03:18:14.548099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:58.818 [2024-11-29 03:18:14.548122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.818 [2024-11-29 03:18:14.549365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.250 ms, result 0 00:30:59.762 [... intermediate copy-progress updates (21/1024 MB through 1020/1024 MB) omitted ...] [2024-11-29T03:19:20.772Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 03:19:20.492960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.780 [2024-11-29 03:19:20.493232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.780 [2024-11-29 03:19:20.493485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.780 [2024-11-29 03:19:20.493529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.780 [2024-11-29 03:19:20.493599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.780 [2024-11-29 03:19:20.494538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.780 [2024-11-29 03:19:20.494671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.780 [2024-11-29 03:19:20.494737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:32:04.780 [2024-11-29 03:19:20.494766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.780 [2024-11-29 03:19:20.495046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.780 [2024-11-29 03:19:20.495090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.780 [2024-11-29 03:19:20.495114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:32:04.780 [2024-11-29 03:19:20.495135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.780 [2024-11-29 03:19:20.495189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.780 [2024-11-29 03:19:20.495212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.780 [2024-11-29 03:19:20.495340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:04.780 [2024-11-29 03:19:20.495368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.780 [2024-11-29 03:19:20.495451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.780 [2024-11-29 03:19:20.495474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.780 [2024-11-29 03:19:20.495495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
[2024-11-29 03:19:20.492960-20.495514] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Deinit core IO channel (0.004 ms), Unregister IO device (0.768 ms), Stop core poller (0.235 ms), Fast persist NV cache metadata (0.006 ms), Set FTL SHM clean state (0.024 ms)
[2024-11-29 03:19:20.493599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-29 03:19:20.495541] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-29 03:19:20.495640-20.498816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
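All 100 bands report the same tuple here; when they do not, a one-line tally of the dump is handy. A sketch, assuming this console output was saved to a file (the name build.log is hypothetical):

  # Tally bands per state from a raw ftl_dev_dump_bands dump
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log \
    | awk '{ states[$NF]++ } END { for (s in states) print s ":", states[s] }'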
[2024-11-29 03:19:20.498843] ftl_debug.c: 211-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93a08837-8e91-4216-aa7c-0f9d19487196, total valid LBAs: 0, total writes: 32, user writes: 0, WAF: inf, limits: crit 0, high 0, low 0, start 0
[2024-11-29 03:19:20.498944-20.501716] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Dump statistics (3.404 ms), Deinitialize L2P (2.503 ms), Deinitialize P2L checkpointing (0.104 ms)
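WAF prints as inf because write amplification is the ratio of device writes to user writes, and this pass issued no user I/O: 32 / 0 has no finite value. With user traffic the same two counters give a finite factor; a worked example with hypothetical counts:

  # WAF = total (device) writes / user writes; 32 / 0 above prints as "inf"
  awk 'BEGIN { total = 48; user = 32; printf "WAF: %.2f\n", total / user }'   # 1.50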
[2024-11-29 03:19:20.510000-20.540056] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps (each 0.000 ms, status 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-29 03:19:20.540220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.236 ms, result 0
03:19:20 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
03:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
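These two commands are the heart of the restore test: md5sum -c verifies the data previously read back from the FTL device against the stored checksum, and spdk_dd then replays the source file into the ftl0 bdev at the offset the test uses. Run by hand the pair looks like this (paths and flags exactly as in the log; the ftl0 bdev must already be defined in the JSON config):

  # Verify the read-back data against the saved checksum
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # Write the source file into the ftl0 output bdev at output offset 131072 (--seek)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --seek=131072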
[2024-11-29 03:19:23.056938] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
[2024-11-29 03:19:23.057245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96196 ]
[2024-11-29 03:19:23.204955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-29 03:19:23.233643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-29 03:19:23.350296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (x2)
[2024-11-29 03:19:23.510546-23.511530] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Check configuration (0.005 ms), Open base bdev (0.036 ms), Open cache bdev (0.337 ms), Load super block (0.027 ms), Validate super block (0.037 ms)
[2024-11-29 03:19:23.510747] mngt/ftl_mngt_bdev.c: 196/236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache; Using bdev as NV Cache device
[2024-11-29 03:19:23.511380] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
[2024-11-29 03:19:23.511812-23.514279] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Initialize memory pools (0.214 ms), Initialize bands (0.064 ms), Register IO device (0.011 ms), Initialize core IO channel (2.094 ms), Decorate bands (0.016 ms)
[2024-11-29 03:19:23.512049] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-29 03:19:23.514313] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-29 03:19:23.514340-23.514531] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load/store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load/store 0x150 bytes, base layout blob load/store 0x48 bytes, layout blob load/store 0x190 bytes
[2024-11-29 03:19:23.514542-23.514597] ftl_layout.c: 685-692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
[2024-11-29 03:19:23.514606-23.514757] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Initialize layout (0.295 ms), Verify layout (0.069 ms)
[2024-11-29 03:19:23.514888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB): sb 0.00 / 0.12, l2p 0.12 / 80.00, band_md 80.12 / 0.50, band_md_mirror 80.62 / 0.50, nvc_md 113.88 / 0.12, nvc_md_mirror 114.00 / 0.12, p2l0 81.12 / 8.00, p2l1 89.12 / 8.00, p2l2 97.12 / 8.00, p2l3 105.12 / 8.00, trim_md 113.12 / 0.25, trim_md_mirror 113.38 / 0.25, trim_log 113.62 / 0.12, trim_log_mirror 113.75 / 0.12
[2024-11-29 03:19:23.515205] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB): sb_mirror 0.00 / 0.12, vmap 102400.25 / 3.38, data_btm 0.25 / 102400.00
[2024-11-29 03:19:23.515282] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20; type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000; type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80; type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80; type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800; type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800; type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800; type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800; type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40; type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40; type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20; type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20; type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20; type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-29 03:19:23.515408] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20; type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000; type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360; type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
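The dump is internally consistent, which is worth spot-checking when debugging layout issues: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB of the l2p region, and each SB metadata region's blk_offs is the previous region's blk_offs plus its blk_sz. A quick check of both in shell arithmetic (values taken from the lines above):

  # L2P table: entries * address size, in MiB (matches the l2p region)
  echo $(( 20971520 * 4 / 1048576 ))        # 80
  # SB metadata regions are packed back to back: offset + size = next offset
  printf '0x%x\n' $(( 0x20 + 0x5000 ))      # 0x5020
  printf '0x%x\n' $(( 0x5020 + 0x80 ))      # 0x50a0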
[2024-11-29 03:19:23.515452] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.655 ms, status: 0)
[2024-11-29 03:19:23.525444-23.548572] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Initialize metadata (9.926 ms), Initialize band addresses (0.067 ms), Initialize NV cache (21.130 ms), Initialize valid map (0.005 ms), Initialize trim map (0.067 ms), Initialize bands metadata (0.135 ms), Initialize reloc (8.150 ms)
[2024-11-29 03:19:23.557208] ftl_nv_cache.c:1772/1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2; state loaded successfully
[2024-11-29 03:19:23.557374-23.570745] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Restore NV cache metadata (0.193 ms), Restore valid map metadata (12.382 ms), Restore band info metadata (0.101 ms), Restore trim metadata (0.001 ms), Initialize P2L checkpointing (0.271 ms), Restore P2L checkpoints (0.012 ms)
[2024-11-29 03:19:23.570698] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
[2024-11-29 03:19:23.580078] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-29 03:19:23.580344-23.589737] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions (all status 0): Initialize L2P (9.579 ms), Restore L2P (2.458 ms), Finalize band initialization (0.039 ms), Start core poller (0.005 ms), Self test on startup (0.011 ms), Set FTL dirty state (6.009 ms), Finalize initialization (0.047 ms)
[2024-11-29 03:19:23.583150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-29 03:19:23.592247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.230 ms, result 0
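The 81.230 ms total is dominated by a handful of steps: NV cache init (21.130 ms), valid-map restore (12.382 ms), metadata init (9.926 ms), L2P init (9.579 ms), reloc init (8.150 ms) and the dirty-state write (6.009 ms) account for roughly 67 ms of it. The per-step durations can be summed straight from a saved copy of this output (build.log is hypothetical; restrict the input to one startup section first, since the full log contains several):

  # Sum every per-step "duration: X ms" reported by trace_step
  grep -o 'duration: [0-9.]* ms' build.log \
    | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'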
Copying: 912/1024 [MB] (20 MBps) [2024-11-29T03:20:19.943Z] Copying: 932/1024 [MB] (19 MBps) [2024-11-29T03:20:20.887Z] Copying: 951/1024 [MB] (19 MBps) [2024-11-29T03:20:21.831Z] Copying: 972/1024 [MB] (20 MBps) [2024-11-29T03:20:22.772Z] Copying: 992/1024 [MB] (20 MBps) [2024-11-29T03:20:23.714Z] Copying: 1012/1024 [MB] (19 MBps) [2024-11-29T03:20:24.661Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-29T03:20:24.661Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 03:20:24.353287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.669 [2024-11-29 03:20:24.353488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:08.669 [2024-11-29 03:20:24.353573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:08.669 [2024-11-29 03:20:24.353600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.669 [2024-11-29 03:20:24.355822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:08.669 [2024-11-29 03:20:24.359054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.669 [2024-11-29 03:20:24.359221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:08.669 [2024-11-29 03:20:24.359310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.067 ms 00:33:08.669 [2024-11-29 03:20:24.359337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.669 [2024-11-29 03:20:24.369904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.669 [2024-11-29 03:20:24.370091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:08.669 [2024-11-29 03:20:24.370152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.634 ms 00:33:08.669 [2024-11-29 03:20:24.370177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.669 [2024-11-29 03:20:24.370231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.669 [2024-11-29 03:20:24.370256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:08.669 [2024-11-29 03:20:24.370277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:08.669 [2024-11-29 03:20:24.370297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.669 [2024-11-29 03:20:24.370467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.669 [2024-11-29 03:20:24.370501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:08.669 [2024-11-29 03:20:24.370521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:08.669 [2024-11-29 03:20:24.370541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.669 [2024-11-29 03:20:24.370567] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:08.669 [2024-11-29 03:20:24.370621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:33:08.669 [2024-11-29 03:20:24.370704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:08.670 [2024-11-29 03:20:24.370735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:08.670 [2024-11-29 03:20:24.370763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: 
[2024-11-29 03:20:24.372175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-29 03:20:24.372188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93a08837-8e91-4216-aa7c-0f9d19487196
[2024-11-29 03:20:24.372197] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048
[2024-11-29 03:20:24.372205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130080
[2024-11-29 03:20:24.372212] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048
[2024-11-29 03:20:24.372222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002
[2024-11-29 03:20:24.372237] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-29 03:20:24.372245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-29 03:20:24.372253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-29 03:20:24.372259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-29 03:20:24.372267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-11-29 03:20:24.372275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:24.372284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
[2024-11-29 03:20:24.372292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms
[2024-11-29 03:20:24.372300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.374684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:24.374851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
[2024-11-29 03:20:24.374878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms
[2024-11-29 03:20:24.374886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.375005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:24.375021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
[2024-11-29 03:20:24.375030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
[2024-11-29 03:20:24.375038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.382299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.382468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
[2024-11-29 03:20:24.382486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.382495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.382559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.382568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
[2024-11-29 03:20:24.382576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.382584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.382637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.382648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
[2024-11-29 03:20:24.382662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.382669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.382686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.382700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
[2024-11-29 03:20:24.382708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.382715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.396264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.396326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
[2024-11-29 03:20:24.396338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.396346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.407916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.407968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
[2024-11-29 03:20:24.407980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.407989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
[2024-11-29 03:20:24.408057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
[2024-11-29 03:20:24.408134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
[2024-11-29 03:20:24.408222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
[2024-11-29 03:20:24.408282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
[2024-11-29 03:20:24.408351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-29 03:20:24.408427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-11-29 03:20:24.408436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-29 03:20:24.408444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:24.408585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.540 ms, result 0
03:20:25 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
[2024-11-29 03:20:25.496007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
[2024-11-29 03:20:25.496151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96817 ]
[2024-11-29 03:20:25.639863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-29 03:20:25.667929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-29 03:20:25.783782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-29 03:20:25.783897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-29 03:20:25.944483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.944540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
[2024-11-29 03:20:25.944559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
[2024-11-29 03:20:25.944567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.944628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.944639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-11-29 03:20:25.944651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
[2024-11-29 03:20:25.944666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.944697] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-29 03:20:25.944995] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-29 03:20:25.945014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.945050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
[2024-11-29 03:20:25.945070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms
[2024-11-29 03:20:25.945078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.945355] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
[2024-11-29 03:20:25.945388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.945397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
[2024-11-29 03:20:25.945407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
[2024-11-29 03:20:25.945418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.945474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.945484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
[2024-11-29 03:20:25.945492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
[2024-11-29 03:20:25.945499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.945747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.945765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
[2024-11-29 03:20:25.945774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms
[2024-11-29 03:20:25.945782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.945930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.945974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
[2024-11-29 03:20:25.945987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
[2024-11-29 03:20:25.945996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.946020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.946029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
[2024-11-29 03:20:25.946038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
[2024-11-29 03:20:25.946046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.946067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-29 03:20:25.948166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.948342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
[2024-11-29 03:20:25.948367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms
[2024-11-29 03:20:25.948375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.948414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.948422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
[2024-11-29 03:20:25.948436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
[2024-11-29 03:20:25.948446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.948499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-29 03:20:25.948522] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-29 03:20:25.948570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-29 03:20:25.948591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-29 03:20:25.948695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-29 03:20:25.948711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-29 03:20:25.948721] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-29 03:20:25.948731] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-29 03:20:25.948742] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-29 03:20:25.948754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-29 03:20:25.948763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-29 03:20:25.948775] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-29 03:20:25.948782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-29 03:20:25.948790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.948797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
[2024-11-29 03:20:25.948805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms
[2024-11-29 03:20:25.948811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.948920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.948931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
[2024-11-29 03:20:25.948942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
[2024-11-29 03:20:25.948950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.949063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-29 03:20:25.949075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
[2024-11-29 03:20:25.949084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
[2024-11-29 03:20:25.949096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
[2024-11-29 03:20:25.949112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
[2024-11-29 03:20:25.949120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
[2024-11-29 03:20:25.949128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
[2024-11-29 03:20:25.949137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
[2024-11-29 03:20:25.949144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
[2024-11-29 03:20:25.949152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
[2024-11-29 03:20:25.949162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
[2024-11-29 03:20:25.949170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
[2024-11-29 03:20:25.949178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
[2024-11-29 03:20:25.949186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
[2024-11-29 03:20:25.949193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
[2024-11-29 03:20:25.949243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
[2024-11-29 03:20:25.949250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
[2024-11-29 03:20:25.949268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
[2024-11-29 03:20:25.949275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
[2024-11-29 03:20:25.949282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
[2024-11-29 03:20:25.949288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
[2024-11-29 03:20:25.949295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
[2024-11-29 03:20:25.949302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
[2024-11-29 03:20:25.949309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
[2024-11-29 03:20:25.949315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
[2024-11-29 03:20:25.949321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
[2024-11-29 03:20:25.949328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
[2024-11-29 03:20:25.949335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
[2024-11-29 03:20:25.949341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
[2024-11-29 03:20:25.949348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
[2024-11-29 03:20:25.949354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
[2024-11-29 03:20:25.949360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
[2024-11-29 03:20:25.949370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
[2024-11-29 03:20:25.949377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
[2024-11-29 03:20:25.949383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
[2024-11-29 03:20:25.949390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
[2024-11-29 03:20:25.949396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
[2024-11-29 03:20:25.949409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
[2024-11-29 03:20:25.949416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-29 03:20:25.949433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
[2024-11-29 03:20:25.949440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
[2024-11-29 03:20:25.949450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
[2024-11-29 03:20:25.949458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
[2024-11-29 03:20:25.949465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
[2024-11-29 03:20:25.949471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
[2024-11-29 03:20:25.949478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
[2024-11-29 03:20:25.949486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
[2024-11-29 03:20:25.949493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
[2024-11-29 03:20:25.949502] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-29 03:20:25.949511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-29 03:20:25.949523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
[2024-11-29 03:20:25.949530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
[2024-11-29 03:20:25.949537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
[2024-11-29 03:20:25.949544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
[2024-11-29 03:20:25.949552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
[2024-11-29 03:20:25.949559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
[2024-11-29 03:20:25.949567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
[2024-11-29 03:20:25.949575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
[2024-11-29 03:20:25.949582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
[2024-11-29 03:20:25.949589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
[2024-11-29 03:20:25.949596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
[2024-11-29 03:20:25.949609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
[2024-11-29 03:20:25.949618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
[2024-11-29 03:20:25.949625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-29 03:20:25.949632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-29 03:20:25.949640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-29 03:20:25.949648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-29 03:20:25.949655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-29 03:20:25.949662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-29 03:20:25.949669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-29 03:20:25.949679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.949687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
[2024-11-29 03:20:25.949694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms
[2024-11-29 03:20:25.949705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.959343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.959520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
[2024-11-29 03:20:25.959537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.597 ms
[2024-11-29 03:20:25.959545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.959633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.959648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
[2024-11-29 03:20:25.959657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
[2024-11-29 03:20:25.959664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.982035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.982087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
[2024-11-29 03:20:25.982100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.312 ms
[2024-11-29 03:20:25.982109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.982154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.982163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
[2024-11-29 03:20:25.982173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-11-29 03:20:25.982181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.982289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.982307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
[2024-11-29 03:20:25.982316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
[2024-11-29 03:20:25.982324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.982451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.982464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
[2024-11-29 03:20:25.982473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms
[2024-11-29 03:20:25.982484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.990454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.990502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
[2024-11-29 03:20:25.990520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms
[2024-11-29 03:20:25.990529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:25.990649] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-11-29 03:20:25.990665] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-29 03:20:25.990675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:25.990684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
[2024-11-29 03:20:25.990694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
[2024-11-29 03:20:25.990706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.003710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.003756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
[2024-11-29 03:20:26.003767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.987 ms
[2024-11-29 03:20:26.003775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.003918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.003929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
[2024-11-29 03:20:26.003938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms
[2024-11-29 03:20:26.003949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.003997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.004011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
[2024-11-29 03:20:26.004020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
[2024-11-29 03:20:26.004033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.004333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.004353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
[2024-11-29 03:20:26.004362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms
[2024-11-29 03:20:26.004369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.004393] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
[2024-11-29 03:20:26.004403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.004417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
[2024-11-29 03:20:26.004424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
[2024-11-29 03:20:26.004432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.013695] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-29 03:20:26.013874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.013885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
[2024-11-29 03:20:26.013895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.424 ms
[2024-11-29 03:20:26.013903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.016330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.016368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
[2024-11-29 03:20:26.016377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms
[2024-11-29 03:20:26.016385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.016462] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000
[2024-11-29 03:20:26.017055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.017073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
[2024-11-29 03:20:26.017083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms
[2024-11-29 03:20:26.017093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.017119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.017134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
[2024-11-29 03:20:26.017142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-11-29 03:20:26.017150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.017184] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-29 03:20:26.017194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.017201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
[2024-11-29 03:20:26.017209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
[2024-11-29 03:20:26.017218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.023216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.023265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
[2024-11-29 03:20:26.023276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.981 ms
[2024-11-29 03:20:26.023285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.023369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:20:26.023379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
[2024-11-29 03:20:26.023393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
[2024-11-29 03:20:26.023404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:20:26.024788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.779 ms, result 0
[2024-11-29T03:21:31.659Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-29 03:21:31.510482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:21:31.510761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-11-29 03:21:31.510939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-11-29 03:21:31.510981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:21:31.511034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-29 03:21:31.511578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:21:31.511681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-11-29 03:21:31.511759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms
[2024-11-29 03:21:31.511787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:21:31.512098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:21:31.512132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-11-29 03:21:31.512157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms
[2024-11-29 03:21:31.512225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:21:31.512287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:21:31.512315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
[2024-11-29 03:21:31.512340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-11-29 03:21:31.512363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:21:31.512419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 03:21:31.512433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
[2024-11-29 03:21:31.512447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
[2024-11-29 03:21:31.512460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 03:21:31.512476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-29 03:21:31.512490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
[2024-11-29 03:21:31.512503 .. 03:21:31.513150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 68: 0 / 261120 wr_cnt: 0 state: free
00:34:15.668 [2024-11-29 03:21:31.513160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:15.668 [2024-11-29 03:21:31.513425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:15.669 [2024-11-29 03:21:31.513434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:15.669 [2024-11-29 03:21:31.513444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:15.669 [2024-11-29 03:21:31.513453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:15.669 [2024-11-29 03:21:31.513463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:15.669 [2024-11-29 03:21:31.513482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:15.669 [2024-11-29 03:21:31.513495] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93a08837-8e91-4216-aa7c-0f9d19487196 00:34:15.669 [2024-11-29 03:21:31.513505] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:15.669 [2024-11-29 03:21:31.513514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1056 00:34:15.669 [2024-11-29 03:21:31.513523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:34:15.669 [2024-11-29 03:21:31.513533] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0312 00:34:15.669 [2024-11-29 03:21:31.513548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:15.669 [2024-11-29 03:21:31.513557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:15.669 [2024-11-29 03:21:31.513567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:15.669 [2024-11-29 03:21:31.513575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:15.669 [2024-11-29 03:21:31.513583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:15.669 [2024-11-29 03:21:31.513592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.669 [2024-11-29 03:21:31.513601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:15.669 [2024-11-29 03:21:31.513611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:34:15.669 [2024-11-29 03:21:31.513619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.516095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.669 [2024-11-29 03:21:31.516189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:15.669 [2024-11-29 03:21:31.516243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.458 ms 00:34:15.669 [2024-11-29 03:21:31.516264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.516364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.669 [2024-11-29 03:21:31.516388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:15.669 [2024-11-29 03:21:31.516408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 
00:34:15.669 [2024-11-29 03:21:31.516426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.521474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.521584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:15.669 [2024-11-29 03:21:31.521639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.521661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.521723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.521750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:15.669 [2024-11-29 03:21:31.521769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.521787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.521862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.522023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:15.669 [2024-11-29 03:21:31.522047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.522069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.522097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.522117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:15.669 [2024-11-29 03:21:31.522136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.522197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.531462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.531598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:15.669 [2024-11-29 03:21:31.531643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.531665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.539499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.539618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:15.669 [2024-11-29 03:21:31.539664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.539685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.539735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.539757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:15.669 [2024-11-29 03:21:31.539777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.539800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.539845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.539867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:15.669 [2024-11-29 03:21:31.539924] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.539945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.540007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.540029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:15.669 [2024-11-29 03:21:31.540048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.540065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.540106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.540219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:15.669 [2024-11-29 03:21:31.540251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.540268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.540319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.540616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:15.669 [2024-11-29 03:21:31.540626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.540633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.540676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.669 [2024-11-29 03:21:31.540685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:15.669 [2024-11-29 03:21:31.540693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.669 [2024-11-29 03:21:31.540700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.669 [2024-11-29 03:21:31.540820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 30.309 ms, result 0 00:34:15.931 00:34:15.931 00:34:15.931 03:21:31 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:17.898 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:17.898 Process with pid 94726 is not found 00:34:17.898 Remove shared memory files 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94726 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94726 ']' 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94726 00:34:17.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94726) - No such process 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94726 is not found' 
00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_band_md /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_l2p_l1 /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_l2p_l2 /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_l2p_l2_ctx /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_nvc_md /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_p2l_pool /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_sb /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_sb_shm /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_trim_bitmap /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_trim_log /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_trim_md /dev/hugepages/ftl_93a08837-8e91-4216-aa7c-0f9d19487196_vmap 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:17.898 00:34:17.898 real 4m34.408s 00:34:17.898 user 4m22.452s 00:34:17.898 sys 0m11.657s 00:34:17.898 ************************************ 00:34:17.898 END TEST ftl_restore_fast 00:34:17.898 ************************************ 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.898 03:21:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@14 -- # killprocess 85874 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@954 -- # '[' -z 85874 ']' 00:34:17.898 Process with pid 85874 is not found 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@958 -- # kill -0 85874 00:34:17.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85874) - No such process 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 85874 is not found' 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97516 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97516 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 97516 ']' 00:34:17.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.898 03:21:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.898 03:21:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:17.898 [2024-11-29 03:21:33.737100] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:34:17.898 [2024-11-29 03:21:33.737255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97516 ] 00:34:18.185 [2024-11-29 03:21:33.878381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.185 [2024-11-29 03:21:33.907524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.762 03:21:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.762 03:21:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:18.762 03:21:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:19.024 nvme0n1 00:34:19.024 03:21:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:19.024 03:21:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:19.024 03:21:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:19.289 03:21:35 ftl -- ftl/common.sh@28 -- # stores=5f4136df-cb44-494d-9a90-f61ba81d6da6 00:34:19.289 03:21:35 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:19.289 03:21:35 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f4136df-cb44-494d-9a90-f61ba81d6da6 00:34:19.550 03:21:35 ftl -- ftl/ftl.sh@23 -- # killprocess 97516 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@954 -- # '[' -z 97516 ']' 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@958 -- # kill -0 97516 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@959 -- # uname 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97516 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.550 killing process with pid 97516 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97516' 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@973 -- # kill 97516 00:34:19.550 03:21:35 ftl -- common/autotest_common.sh@978 -- # wait 97516 00:34:19.810 03:21:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:20.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:20.071 Waiting for block devices as requested 00:34:20.071 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:20.333 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:20.333 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:20.333 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:25.623 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:25.623 03:21:41 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:25.623 Remove shared memory files 00:34:25.623 03:21:41 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:25.623 03:21:41 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:25.623 03:21:41 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:25.623 03:21:41 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:25.623 03:21:41 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:25.623 03:21:41 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:25.623 00:34:25.623 real 
17m40.603s 00:34:25.623 user 19m28.266s 00:34:25.623 sys 1m31.008s 00:34:25.623 03:21:41 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.623 ************************************ 00:34:25.623 END TEST ftl 00:34:25.623 ************************************ 00:34:25.623 03:21:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:25.623 03:21:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:25.623 03:21:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:25.623 03:21:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:25.623 03:21:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:25.623 03:21:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:25.623 03:21:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:25.623 03:21:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:25.623 03:21:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:25.623 03:21:41 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:25.623 03:21:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:25.623 03:21:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.623 03:21:41 -- common/autotest_common.sh@10 -- # set +x 00:34:25.623 03:21:41 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:25.623 03:21:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:25.623 03:21:41 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:25.623 03:21:41 -- common/autotest_common.sh@10 -- # set +x 00:34:27.010 INFO: APP EXITING 00:34:27.010 INFO: killing all VMs 00:34:27.010 INFO: killing vhost app 00:34:27.010 INFO: EXIT DONE 00:34:27.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:27.842 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:27.842 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:27.842 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:27.842 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:28.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:28.674 Cleaning 00:34:28.674 Removing: /var/run/dpdk/spdk0/config 00:34:28.674 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:28.674 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:28.674 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:28.674 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:28.674 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:28.674 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:28.674 Removing: /var/run/dpdk/spdk0 00:34:28.674 Removing: /var/run/dpdk/spdk_pid68873 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69031 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69227 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69309 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69338 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69449 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69462 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69644 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69718 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69797 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69897 00:34:28.674 Removing: /var/run/dpdk/spdk_pid69978 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70012 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70048 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70119 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70192 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70617 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70659 00:34:28.674 
Removing: /var/run/dpdk/spdk_pid70705 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70720 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70774 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70790 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70848 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70864 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70906 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70924 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70966 00:34:28.674 Removing: /var/run/dpdk/spdk_pid70984 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71111 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71142 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71231 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71392 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71454 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71485 00:34:28.674 Removing: /var/run/dpdk/spdk_pid71920 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72007 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72100 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72142 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72167 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72240 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72852 00:34:28.674 Removing: /var/run/dpdk/spdk_pid72883 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73350 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73443 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73541 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73583 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73603 00:34:28.674 Removing: /var/run/dpdk/spdk_pid73623 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75442 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75563 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75567 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75579 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75623 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75627 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75639 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75678 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75682 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75694 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75734 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75738 00:34:28.674 Removing: /var/run/dpdk/spdk_pid75750 00:34:28.674 Removing: /var/run/dpdk/spdk_pid77126 00:34:28.674 Removing: /var/run/dpdk/spdk_pid77212 00:34:28.674 Removing: /var/run/dpdk/spdk_pid78604 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80339 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80394 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80466 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80559 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80645 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80734 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80792 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80857 00:34:28.674 Removing: /var/run/dpdk/spdk_pid80956 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81036 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81121 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81184 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81249 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81350 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81435 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81515 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81573 00:34:28.674 Removing: /var/run/dpdk/spdk_pid81641 00:34:28.935 Removing: /var/run/dpdk/spdk_pid81741 00:34:28.935 Removing: /var/run/dpdk/spdk_pid81822 00:34:28.935 Removing: /var/run/dpdk/spdk_pid81906 00:34:28.935 Removing: /var/run/dpdk/spdk_pid81964 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82027 00:34:28.935 Removing: 
/var/run/dpdk/spdk_pid82092 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82161 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82253 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82336 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82425 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82484 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82550 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82614 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82683 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82781 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82855 00:34:28.935 Removing: /var/run/dpdk/spdk_pid82999 00:34:28.935 Removing: /var/run/dpdk/spdk_pid83261 00:34:28.935 Removing: /var/run/dpdk/spdk_pid83292 00:34:28.935 Removing: /var/run/dpdk/spdk_pid83734 00:34:28.935 Removing: /var/run/dpdk/spdk_pid83910 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84004 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84109 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84147 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84172 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84464 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84508 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84564 00:34:28.935 Removing: /var/run/dpdk/spdk_pid84930 00:34:28.935 Removing: /var/run/dpdk/spdk_pid85075 00:34:28.935 Removing: /var/run/dpdk/spdk_pid85874 00:34:28.935 Removing: /var/run/dpdk/spdk_pid85993 00:34:28.935 Removing: /var/run/dpdk/spdk_pid86151 00:34:28.935 Removing: /var/run/dpdk/spdk_pid86237 00:34:28.935 Removing: /var/run/dpdk/spdk_pid86524 00:34:28.935 Removing: /var/run/dpdk/spdk_pid86778 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87119 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87291 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87505 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87551 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87800 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87821 00:34:28.935 Removing: /var/run/dpdk/spdk_pid87862 00:34:28.935 Removing: /var/run/dpdk/spdk_pid88121 00:34:28.935 Removing: /var/run/dpdk/spdk_pid88340 00:34:28.935 Removing: /var/run/dpdk/spdk_pid89028 00:34:28.935 Removing: /var/run/dpdk/spdk_pid89667 00:34:28.935 Removing: /var/run/dpdk/spdk_pid90276 00:34:28.935 Removing: /var/run/dpdk/spdk_pid91051 00:34:28.935 Removing: /var/run/dpdk/spdk_pid91182 00:34:28.935 Removing: /var/run/dpdk/spdk_pid91263 00:34:28.935 Removing: /var/run/dpdk/spdk_pid91748 00:34:28.935 Removing: /var/run/dpdk/spdk_pid91795 00:34:28.935 Removing: /var/run/dpdk/spdk_pid92541 00:34:28.935 Removing: /var/run/dpdk/spdk_pid93042 00:34:28.935 Removing: /var/run/dpdk/spdk_pid93800 00:34:28.935 Removing: /var/run/dpdk/spdk_pid93923 00:34:28.935 Removing: /var/run/dpdk/spdk_pid93954 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94010 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94059 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94113 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94301 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94372 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94428 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94500 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94535 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94591 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94726 00:34:28.935 Removing: /var/run/dpdk/spdk_pid94935 00:34:28.935 Removing: /var/run/dpdk/spdk_pid95491 00:34:28.935 Removing: /var/run/dpdk/spdk_pid96196 00:34:28.935 Removing: /var/run/dpdk/spdk_pid96817 00:34:28.935 Removing: /var/run/dpdk/spdk_pid97516 00:34:28.935 Clean 00:34:28.935 03:21:44 -- common/autotest_common.sh@1453 -- # return 0 00:34:28.935 
03:21:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:28.935 03:21:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.935 03:21:44 -- common/autotest_common.sh@10 -- # set +x 00:34:29.201 03:21:44 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:29.201 03:21:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:29.201 03:21:44 -- common/autotest_common.sh@10 -- # set +x 00:34:29.201 03:21:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:29.201 03:21:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:29.201 03:21:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:29.201 03:21:45 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:29.201 03:21:45 -- spdk/autotest.sh@398 -- # hostname 00:34:29.201 03:21:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:29.201 geninfo: WARNING: invalid characters removed from testname! 00:34:55.795 03:22:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:58.346 03:22:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:00.260 03:22:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:02.808 03:22:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:04.704 03:22:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:06.609 03:22:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:09.160 03:22:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:09.160 03:22:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:09.160 03:22:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:09.160 03:22:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:09.160 03:22:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:09.160 03:22:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:09.160 + [[ -n 5766 ]] 00:35:09.160 + sudo kill 5766 00:35:09.171 [Pipeline] } 00:35:09.188 [Pipeline] // timeout 00:35:09.194 [Pipeline] } 00:35:09.209 [Pipeline] // stage 00:35:09.215 [Pipeline] } 00:35:09.230 [Pipeline] // catchError 00:35:09.240 [Pipeline] stage 00:35:09.242 [Pipeline] { (Stop VM) 00:35:09.266 [Pipeline] sh 00:35:09.581 + vagrant halt 00:35:12.125 ==> default: Halting domain... 00:35:18.817 [Pipeline] sh 00:35:19.099 + vagrant destroy -f 00:35:21.637 ==> default: Removing domain... 00:35:22.222 [Pipeline] sh 00:35:22.507 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:22.518 [Pipeline] } 00:35:22.534 [Pipeline] // stage 00:35:22.539 [Pipeline] } 00:35:22.552 [Pipeline] // dir 00:35:22.558 [Pipeline] } 00:35:22.572 [Pipeline] // wrap 00:35:22.579 [Pipeline] } 00:35:22.592 [Pipeline] // catchError 00:35:22.601 [Pipeline] stage 00:35:22.603 [Pipeline] { (Epilogue) 00:35:22.617 [Pipeline] sh 00:35:22.904 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:28.214 [Pipeline] catchError 00:35:28.217 [Pipeline] { 00:35:28.232 [Pipeline] sh 00:35:28.517 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:28.517 Artifacts sizes are good 00:35:28.528 [Pipeline] } 00:35:28.542 [Pipeline] // catchError 00:35:28.554 [Pipeline] archiveArtifacts 00:35:28.562 Archiving artifacts 00:35:28.674 [Pipeline] cleanWs 00:35:28.687 [WS-CLEANUP] Deleting project workspace... 00:35:28.687 [WS-CLEANUP] Deferred wipeout is used... 00:35:28.694 [WS-CLEANUP] done 00:35:28.696 [Pipeline] } 00:35:28.712 [Pipeline] // stage 00:35:28.717 [Pipeline] } 00:35:28.731 [Pipeline] // node 00:35:28.737 [Pipeline] End of Pipeline 00:35:28.778 Finished: SUCCESS